McIntire Iupui 0104D 10734
Emily S. McIntire
January 2024
Accepted by the Graduate Faculty of Indiana University, in partial
fulfillment of the requirements for the degree of Doctor of Philosophy.
Doctoral Committee
______________________________________
Barbara Manz Friesth, PhD, RN, Chair
______________________________________
Susan Hendricks, Ed.D, RN, CNE, ANEF
______________________________________
Deanna Reising, PhD, RN, ACNS-BC, FAAN, FNAP, ANEF
______________________________________
Joshua Danish, PhD
© 2024
Emily S. McIntire
ACKNOWLEDGEMENT
I would like to thank my committee members and chair – Dr. Hendricks, Dr. Reising, Dr. Danish, and Dr. Manz Friesth. Your thoughtful
questions and expert input helped me grow intellectually and emotionally, and I am
profoundly grateful for your time, engagement, and support! Special, heartfelt thanks to my chair, Dr. Manz Friesth, for guiding me
through this journey! We have shared some challenging times – we made it through this
during a pandemic, and we’ve both experienced great losses. I cannot thank you enough
for staying by my side in this process, guiding me with patience and pushing me with
firm (and yet still gentle) kindness. Your support, expertise, collaboration, time, attention,
and encouragement are just a few words I could use to express what you have done to
help me through this process, but they hardly capture the depth of my gratitude and
genuine appreciation. Thank you a million times over! I could not have done this without
you!
I would also like to acknowledge all my colleagues who supported me, inspired
me, and reminded me that I could do it! Special thanks to Andy Greger – Andy, thank
you for helping me when I faced tech problems (even on the weekends) and educating me
on different technology! Thank you for sharing the excitement of learning with simulation,
and helping make things happen! And perhaps most of all, thank you for our “therapy
sessions,” and of course, our friendship. I also want to acknowledge our simulation lab
team: Anna, Alexis, Stephanie, Trevor, Jeremy, and Callie. You were the best team ever!
I absolutely value your part in helping me conduct my study, but even more, I value you
each as peers, colleagues, and friends. You are the hardest working and most amazing
group! I have learned from each of you and am in awe of each of your individual
strengths. Thank you, thank you, thank you for being a part of my life! And for riding this
Finally, my family: Tom, Claire and Valerie. This was a family commitment, and
I just cannot describe how much your support has meant to me. From taking over dinner
believe in you!” or “You can do it!” sentiments, your part in this is immeasurable. Girls –
thank you for being so patient through all of this, allowing me to still be your mom while
investing so much time in my education. Tom – thank you for being there, for stepping
in, and for taking over when I just couldn’t handle one, more, thing! Thank you for the
breaks and coordinating vacations. Thank you for listening to my frustrations. Thank you
for everything you did and were during these past five years! We did it! And I am
looking forward to our next adventure (just no more school! I promise, for real this
time!)!
Emily S. McIntire
It is essential that nurses independently assume patient care, yet new nurses lack
necessary clinical judgment skills. The purpose of this study was to examine a simulation
pre-brief scaffold to support nursing students’ clinical judgment development and clinical
judgment independence.
single evaluator blinded to condition using the Lasater clinical judgment rubric (LCJR)
intervention group had higher clinical judgment scores during simulation (n = 31, M =
t(65) = -2.653, p < .01. A significant between-group difference was observed for the
noticing and responding subscales of clinical judgment, but not for the interpreting
subscale. No significant difference in the number of unintended cues was found between
groups.
Results support that using an I-VRS in simulation pre-brief enhanced clinical
judgment in simulation. The use of the I-VRS adds to the existing limited evidence
to the clinical setting. Additional testing of the PELS-CJI to guide simulation pre-brief is
encouraged.
TABLE OF CONTENTS
Data Analysis
Demographic Data
Descriptive Statistics
Statistical Test – Research Question 1
Statistical Test – Research Question 2
Statistical Test – Research Question 3
Power Calculations and Effect Size
Limitations
Summary
Chapter IV: Results
Demographic Data
Reliability of the LCJR
Results of Research Questions
Research Question 1
Research Question 2
Research Question 3
Summary of Key Findings
Chapter V: Discussion and Recommendations
Review of Current Study
Problem
Purpose
Research Questions and Findings
Discussion of Findings
Effect of the I-VRS Scaffold on Total Clinical Judgment
Effect of the I-VRS Scaffold on Individual Components of Clinical Judgment
Effect of the I-VRS on Cueing and Clinical Judgment Independence
PELS-CJI and the I-VRS Scaffold
Time to Develop Clinical Judgment
The LCJR to Measure Clinical Judgment
Implications
Limitations
Recommendations
Conclusion
Appendices
Appendix A Lasater Clinical Judgment Rubric
Appendix B Lasater Clinical Judgment Scoring Sheet
References
Curriculum Vitae
LIST OF TABLES
LIST OF FIGURES
LIST OF ABBREVIATIONS
AL Active learning
Independence
Chapter I: Introduction and Background
Over 10 years ago, the Institute of Medicine released a landmark report on the
future of nursing, calling for re-envisioned health care models emphasizing the need for
nurses to obtain and maintain advanced skills and knowledge to provide safe patient-
centered care. As nurses are key contributors to the safety of patients and positive patient
outcomes, it is necessary to take steps to enhance patient safety and prevent adverse
events and medical errors, which lead to as many as 100,000 patient deaths every year
abundant shortage of experienced registered nurses, with over one million nurses, nearly
2022). Goodare (2017) recognized the alarming trend of new graduate nurses leaving the
nursing workforce soon after graduation due to the stress of the role, and current turnover
rates due to burnout are estimated as high as 37% in some areas of the United States
(Haddad et al., 2022). Alarmingly, the level of fatigue and mental health suffering
resulting from the COVID-19 pandemic is certain to negatively impact nursing retention
rates even more (Lopez et al., 2022). As the future of the nursing workforce seems grim,
it is especially disturbing that thousands of qualified students are turned away from
schools of nursing every year due to a parallel nursing faculty shortage, coupled with
Despite the shortage of nursing faculty, current nurse educators must still prepare
practices where adverse events are avoided (AACN, 2022). Clinical competency in
nursing practice involves advanced skillsets and problem-solving abilities, including
observe and assess presenting situations, identify a prioritized client concern, and
generate the best possible evidence-based solutions in order to deliver safe client care”
reasoning. Critical thinking involves the processes of clinical reasoning and clinical
judgment. While critical thinking can occur during clinical situations, critical thinking is
often reflected outside of the patient encounter and involves more broad clinical issues
(i.e., system failures, communication breakdown, team collaboration, etc.) with a more
general impact on one’s nursing practice (Benner et al., 2010). Clinical reasoning is
narrower in scope than critical thinking and is the actual thought process that leads to
patient care decisions at the point of care. During the clinical reasoning process, the
allow them to make informed and safe decisions. The nurse's decisions and how they
respond to the patient’s needs based on the clinical situation are known as clinical
judgments.
With patient safety in mind, health system administrators have argued that new
graduate nurses lack skills necessary for good clinical judgment, and recent data suggests
a mere 9% are ready for practice (Kavanagh & Sharpnack, 2021). New nurse graduates
are not exemplifying the skills (including clinical judgment) to meet the complex
demands of nursing practice required to provide safe patient care (Del Bueno, 2005;
Victor et al., 2021). With nurse turnover at an all-time high, it is essential that new
nurses are ready to independently assume patient care, yet new nurses’ preparedness in
competency and readiness for practice is lacking. Nursing students need additional
learning opportunities to enhance their clinical judgment. One of the most common
strategies that nurse educators use to support clinical reasoning and clinical judgment
practice, active learning (AL) instructional methods have been explored to promote
application of content and learner engagement (Candela et al., 2006; Drake, 2012;
Stanley & Dougherty, 2010). Perhaps the leading AL strategy in nursing education is
simulation (Hanshaw & Dickerson, 2020). Simulation in healthcare involves three main
pre-briefing involves “An information or orientation session held prior to the start of a
opportunity to experience events that mimic a real-life situation, and post simulation
clinical events (Lioce, 2020, p. 37). Simulation in nursing education supports learners in
deepening their understanding to better execute and apply psychomotor and cognitive
skills, including clinical reasoning, to inform clinical judgments and meet competency
simulation experiences offer an effective solution to prepare nursing students for
independent practice.
application and transfer of complex skills require assisted development (Belland, 2014).
conceptual application and transfer) are applied in simulation experiences to help nursing
cueing) and post-simulation (i.e., debriefing). Debriefing is heavily studied and reported
in the literature, and simulation standards recognize the need for structured debriefing
critical thinking, clinical reasoning, and clinical judgment skills (Dreifuerst, 2009). Pre-
briefing has become an area of interest within simulation research in recent years, and
briefing activities help the learner to begin problem-solving regarding patient care
inexperienced learners better gather pertinent patient information to inform patient care
(McDermott, 2016).
form of reality cues or conceptual cues. Reality cues offer information “…to help the
learner interpret or clarify simulated reality through information delivered during the
simulation” (Lioce, 2020, p. 12). Conceptual cues differ from reality cues and involve
patient or role player” (Lioce, 2020, p. 12). Simulation operators provide reality cues to
help clarify learner misconceptions brought forth by the simulated environment (i.e., the
simulation operator might say [speaking as the manikin] “it’s only 9:00” if the student
looks at their watch, when it is, in fact, 2:30). Conceptual cueing, on the other hand, is
done to provide the learner with assistance to help them meet the simulation objectives
(i.e., if the student does not look at their watch, but the time is critical to determine if a
medication is appropriate to give, the simulation operator might say [speaking as the
manikin] “what time is it now?”). Conceptual cueing, then, is like a hint to help the
support (Paige, 2016). Both reality and conceptual cues should be planned and
unintended cues (cues that are not planned or that are not normally present and may
misrepresent a real situation) can negatively influence learning (Biggs et al., 2021;
Jeffries, 2015; Paige & Morin, 2013). Learners may become dependent on unintended
cues, and such cues can interrupt team communication. Such disruption
may impede learner independence. To prevent disrupting the learning processes, all
Problem Statement
Considering the need to better prepare undergraduate nursing students for the
involving the debriefing scaffold is prominent, and debriefing has demonstrated an ability
exists involving best practices in scaffold techniques before simulation to best support the
student during simulation. More specifically, there is limited literature on scaffolds pre-
Purpose
The purpose of this study is to examine the use of a pre-brief scaffold to support
simulation was measured with the Lasater clinical judgment rubric (LCJR). To measure
Research Questions
pre-brief or no scaffold?
brief?
Definition of Terms
terms is presented.
iterative process that uses nursing knowledge to observe and assess presenting situations,
identify a prioritized client concern, and generate the best possible evidence-based
solutions in order to deliver safe client care” (Dickison, Haerling & Lasater, 2019, p. 2).
Clinical Judgment (Operational Definition) - Total score for clinical judgment and
subscores for noticing, interpreting, and responding on the LCJR. See also Lasater Clinical Judgment Rubric (LCJR).
Clinical Reasoning - The actual thought processes of gathering, analyzing and evaluating
patient and environmental information that lead the nurse to make informed patient care decisions.
Conceptual cues (in simulation) - “Information provided to help the learner reach
responses from the simulated patient or role player” (Lioce, 2020, p. 12). Some
conceptual cues in simulation replicate conceptual cues that are naturally present in an
operator) to draw attention to the patient condition and offer the learner assistance to help
Critical Thinking - reasonable, skillful and reflective thinking that informs decisions. In
nursing practice, critical thinking involves the processes of clinical reasoning and clinical judgment.
activity” (Lioce, 2020, p. 13) that “promotes understanding and supports transfer of
knowledge, skills and attitudes with a focus on best practices to promote safe, quality
2009).
Lasater Clinical Judgment Rubric (LCJR) - A rubric that measures four components of
clinical judgment (noticing, interpreting, responding, and reflecting) with 11 total
correlating descriptors (each with associated developmental levels: beginning,
Pre-brief - “An information or orientation session held prior to the start of a simulation
Reality cues (in simulation) - Information provided “…to help the learner interpret or
clarify simulated reality through information delivered during the simulation” (Lioce,
2020, p. 12). Reality cues clarify learner misconceptions brought forth by the simulated
environment.
p. 44).
Unintended conceptual cues (in simulation) - Cues that are not planned or that are not
normally present and may misrepresent a real situation.
Unintended conceptual cues (in simulation) (Operational Definition) - A count of the
number of cues that are not planned and purposefully integrated into the simulation
(by the simulation operator) to draw attention to the patient condition or otherwise allow
their ability to accomplish tasks/develop skills (on their own) and the potential learning
simulation, and sociocultural theories were explored to inform this research. Simulation
is largely rooted in Kolb’s experiential learning theory (KELT) and the NLN Jeffries’s
simulation theory. Theoretical underpinnings of this study are also informed by Tanner’s
collectively, key elements from each of these theories were interwoven to create a
learning theory, NLN Jeffries’s simulation theory, Tanner’s clinical judgment model, and
presented.
Kolb’s Experiential Learning Theory (KELT)
Simulation is largely rooted in KELT. During the experiential learning cycle, the
during the learning cycle and continues through future experiences (Kolb, 1984). Kolb
cyclical process of four stages: the concrete experience, reflective observation, abstract
framework for simulation delivery, where the simulation is the concrete experience, and
occur in the debriefing process, where learners explore their simulation experience and
how they might assimilate their actions into different clinical scenarios (Dreifuerst,
2009). The learning cycle continues with active experimentation which occurs during
debriefing (after the student has participated in the simulation) or in future clinical events.
During simulation events, this cyclical learning pattern allows nursing students to engage
facilitation is the NLN Jeffries (2015) simulation theory. The NLN Jeffries simulation
theory provides precise guidelines for experiential simulation activities. The theory
overarching goal(s) of the simulation, influence the simulation design which involves
setting specific learning objectives, fidelity, roles, scenario progression and pre/post
simulation activities. The background and design lead to the simulation experiences,
While the NLN Jeffries’s simulation theory recognizes the background, design
facilitator’s role and other educational strategies for simulation implementation are
primarily highlighted during and after the simulation. A successful simulation
educational strategies… and providing appropriate feedback in the form of cues (during)
and debriefing [after] the simulation experience” (Jeffries, 2015, p. 292). When these
The clinical reasoning process that leads to clinical judgments involves four main
Tanner’s (2006) model of clinical judgment in nursing (CJM), as shown in Figure 1. The
concept of noticing is the nurse’s basic assessment of the situation, while interpreting
deciding no intervention is required and reflecting entails evaluating the nursing action
interpreting, responding, and reflecting) with 11 total correlating descriptors (each with
The LCJR can be used to support student learning and to evaluate clinical judgment in
simulation and clinical practice. Research involving the use of the CJM and the LCJR is
Figure 1
Note. This model was produced by Christine Tanner, depicting the process of clinical judgment in nursing.
the understanding of and strategies for encouraging developmental growth from simple to
complex processes, acknowledging that environmental signs and resources aid the learner
in problem solving. According to Vygotsky (1978), learning begins within the learner in
accordance with their prior experiences. Vygotsky recognized that learners may become
‘stuck’ in progressing with learning if current beliefs or understandings conflict with the
The ZPD is one of Vygotsky’s most referenced areas of work in learning and is
tasks/develop skills (on their own) and the potential learning that could occur (with
through the ZPD to achieve behavior change and learning development. Viewing
simulation learning through this theoretical lens, the simulation environment, patient
assessment data, medical devices and clinical scenario allow the student to manage
patient care through the experiential process, but learners’ current understandings or
knowledge may impede cognitive growth. The nursing student may require assistance to
Instructional scaffolds to assist the nursing student in moving through the ZPD, then, may be warranted.
Instructional Scaffolds
broadly considered as support with learning (Belland, 2014). Such support can be
accomplished through a variety of modes (including working with a more experienced
person or peer) or by using some sort of tool (computer or document) that would aid the
advancing in their learning process to their full potential and are decreased as the student
can support nursing student growth and development in adopting advanced skillsets, such
as clinical reasoning and clinical judgment (Dickison et al., 2019). Cueing is a scaffold
in and of itself when done intra-simulation, but cues during simulation may interrupt
learning and prevent the learner from moving through the ZPD toward clinical reasoning
and clinical judgment development if they are not intentionally executed. However,
while cueing is a scaffold, other scaffolds can be developed to help learners recognize
cues (Burbach & Thompson, 2014; Thiele et al., 1986). Scaffolds in pre-briefing to help
the learner recognize intentional cues (cues that would be present in clinical practice)
during the simulation, for example, would support movement through the ZPD toward
Building off the work of Kolb, Jeffries, Tanner, and Vygotsky, the PELS-CJI (see
Figure 2) was created to inform simulation pre-briefing. The PELS-CJI considers the
background and design components of simulation to help the learner recognize (or notice)
appropriate cues to allow the student nurse to independently apply safe clinical judgments
nurses and gain independence in building clinical judgment, a proposed pre-brief scaffold
presented in the video prompt learner responses using components of the LCJR,
with an I-VRS provides a concrete experience where the student can notice and interpret
comparison and exploration with an expert role model and provides active
the subsequent simulation, where the learner begins the experiential learning cycle a
second time.
time. Using the PELS-CJI, the learner can complete the experiential learning cycle twice
during the entire simulation learning activity. The PELS-CJI, then, engages learners in
repetition of similar clinical practice. Such repetition may help develop learned concepts
that can be more readily accessed for future use (National Academies of Sciences,
simulation theory, Tanner’s CJM, and Vygotsky’s sociocultural theory, the PELS-CJI
framework informs simulation pre-brief design. The PELS-CJI allows for repetitive
hypothesized that the I-VRS would support the learner in their ZPD (pertaining to clinical
reasoning skills and clinical judgment) and lead to increased clinical judgment and
demonstrated by increased clinical judgment scores on the LCJR and less use of
Figure 2
CJI)
Significance
To ensure nursing graduates are prepared with clinical reasoning skills to make
supported evidence to guide nurse educators on best scaffolding practices during
pre-brief to support the learner in the experiential component during simulation is required. This
study will provide nurse faculty information regarding the effect of a pre-brief scaffold
theory and how they influence simulation. Learning, teaching, and assessing clinical
inclusion and exclusion criteria, sample selection, instrumentation, data collection and
data analyses used. Chapter IV presents the findings including demographic and data
analyses for the research questions, and chapter V provides a summary of the study,
education.
Chapter II: Literature Review on Learning, Teaching, and Assessing Clinical
Judgment
teaching, and assessing clinical judgment in nursing education from multiple theoretical
lenses. Additionally, a review of the literature related to the practices of scaffold use in
This literature review was developed through multiple searches for key terms
using the CINAHL, ERIC and Education Source databases, with no limitations on
were obtained with the searches involving Simulat* AND (BSN OR Baccalaureate OR
CINAHL, ERIC and Education Source databases was conducted, with no restrictions on
dates of publications, using key words such as Learning Theory AND (Education),
Knowing AND (Learning OR Assess*). All literature sources extracted for this review
Learning Clinical Judgment from Constructivists and Sociocultural Perspectives
professor stands in front of a large group of students in a lecture hall setting and imparts
their knowledge unto them. These traditional learning experiences originated with
store, alter, and use the information gained (Danish & Gresalfi, 2018). Over time, nurse
cognitive nursing skill development (i.e., clinical judgment), but current nursing
perspectives.
processes where the learner works to make sense of new information (Fosnot & Perry, 2005). It is an
iterative process where exploration, error making, reflection, and community discussions
nursing students learn clinical judgment through clinical practice and reflection. Since
clinical judgment is the result of clinical reasoning where the nurse assesses, interprets,
and organizes external data with internal knowledge, simulation with debriefing is an
environment to support preparation for clinical practice (Huston et al., 2018; Tanner,
2006).
learning where knowledge is gained and changed by experience, which is a focal element
constructivist model that informs the use of simulation in nursing education. The KELT
examines the learning process in a cyclical fashion that begins with a concrete experience
allowing practice, and post simulation debriefing provides time for reflection. Abstract
exploring their simulation experience (Dreifuerst, 2009). Finally, the learning cycle
continues with active experimentation during debriefing, where the learners may be
challenged by the facilitator to apply what they have learned to different situations, or
after the simulation, where learners can apply and assimilate what they have learned in
(Tanner, 2006).
immersed in what Tanner (2006, p. 206) refers to as the “social embeddedness of nursing
knowledge,” which is obtained through observation of, and conversations with, other
nurses. Simulation and debriefing allow for the practice and learning of technical and
cognitive nursing processes with and from other healthcare providers in an authentic
physical and social contexts. Learners move through the experiential learning cycle
Bang and Medin’s (2010) reflections align with the sociocultural implications of
learning clinical judgment. As Bang and Medin (2010) assert, culture has a large impact
(Danish & Gresalfi, 2018, p. 36). When there is limited environmental immersion,
Engineering, and Medicine, 2018). In other words, the way a learner responds in any
given situation is partially dependent on the traditions of their social environments, and
these traditions may impact their personal cognitive processes (Cress and Kimmerle,
social environment and prior ways of knowing impact their clinical reasoning and clinical
judgment. For example, in the clinical setting, nursing students may encounter new
which affects their comprehension of the current situation (Clapper, 2015). This could
cause inaccurate interpretations and affect their clinical reasoning. The skewed reasoning
may be transferred onto their clinical judgments, which could have negative
consequences for the patient. How to teach students to critically think to inform clinical
judgments is essential (Dickison et al., 2016). A notable recurring theme for teaching
clinical judgment development involves time to engage in experiences that involve
clinical reasoning components (Cioffi, 2000; Tanner, 2006; Victor et al., 2017; Victor,
2017). Tanner (2006) attributes the novice nurse’s immature thinking processes to their
beginning clinical judgment skills. Having more time in clinical practice, expert nurses
gather more information than novice nurses and recognize and collect more relevant cues
(Burbach & Thompson, 2014; Cioffi, 2000; Hoffman et al., 2009; Levett-Jones et al.,
2010). Novice nurses lack the ability to recognize and collect relevant cues, and they
al., 2010). The novice nurse may think more methodically as they recall theoretical
knowledge that is readily available to the expert nurse. Readily available knowledge
allows the expert nurse to seamlessly integrate the theoretical knowledge with the
experiential knowledge gained in practice (Tanner, 2006). Such enhanced relevant cue
recognition allows the expert nurse to connect cues and predict what may happen to a
patient, allowing for earlier clinical reasoning application and therefore prevention of
asserting that clinical nursing expertise (involving exemplar critical thinking, clinical
reasoning, and clinical judgment) comes not just with time but with “…the refinement of
(Benner, 1982, p. 407). While this notion of time to develop clinical judgment is often
considered over a long-term period (i.e., the duration of one’s career), it can also be
viewed from an acute period (i.e., during a course) (Hamers & Csapo, 1999). Engaging
that develop more effective thinking to lead to expertise (National Academies of
Sciences, Engineering, and Medicine, 2018). Victor et al. (2017) noted a pattern of
growth in clinical judgment following repeated clinical exposure in simulation over the
course of a semester, although the growth was not statistically significant. In another
study that measured clinical judgment over the course of the entire nursing program,
statistically significant growth in clinical judgment was observed (Victor, 2017). The
LCJR was used in both studies to evaluate clinical judgment. Victor (2017) reported
strong internal consistency of the LCJR (Cronbach’s alpha .92) and strong agreement
reliability, the LCJR has been used to evaluate clinical judgment, and repeated learning
(Postma & White, 2015). In addition to conducting repeated learning experiences, there
are specific active learning (AL) strategies that have supported clinical judgment
application. They can lead to increased levels of understanding which can influence
future experiences (Chi & Wylie, 2014). Active learning activities are necessary for
24
processing, and analyzing of situations (Candela et al., 2006; Drake, 2012; Stanley &
Dougherty, 2010).
Active learning strategies have supported clinical judgment development (Ayed et al., 2022; Costello, 2017; Fogg et al., 2020;
Jensen, 2013; Kinyon et al., 2021; Klenke-Borgmann, 2020; Lavoie et al., 2019;
Strickland et al., 2017; Victor, 2017). Of the research on AL experiences and their effect
on clinical judgment, much of the outcome data is based on perceptions (Fogg et al., 2020;
Jensen, 2013; Kinyon et al., 2021; Strickland et al., 2017). Though AL activities such as
concept maps and case studies have been found effective in supporting clinical judgment
development, these activities lack the technological appeal of simulation (Blum &
Parcells, 2012). Simulation has undergone perhaps the most rigorous evaluation and is a
commonly used and studied AL activity in nursing education (Blum & Parcells,
2012; Hayden et al., 2014). Active learning with simulation involves high-fidelity
(Andrea & Kotowski, 2017; Ayed et al., 2022; Bambini et al., 2009; Cazzell & Anderson,
2016; Fogg et al., 2020; Jensen, 2013; Lavoie et al., 2019; Strickland et al., 2017; Victor,
of the literature specific to simulation and its effect on clinical judgment. The integrative
review resulted in 24 total research articles including mixed methods (n = 1), quantitative
(n = 14) and qualitative (n = 9) studies. Many of the studies in their review provided data
about developing clinical judgment in specific areas of nursing (i.e., geriatrics, pediatrics,
cardiac, etc.) (Brown & Chronister, 2009; Johnson et al., 2012; Shin & Kim, 2014; Shin
et al., 2015a; Powell-Laney et al., 2012). Other studies showed that increased clinical
judgment occurs with engagement in multiple simulation experiences (Bussard, 2018;
Shin et al., 2015b; Yuan et al., 2014). Additionally, several studies involving the effects
of simulation debriefing on clinical judgment were recognized (Ashley & Stamp, 2014;
Dreifuerst, 2012; Kuiper et al., 2008; Lasater, 2007a; Lavoie et al., 2013; Mariani et al.,
2013). Klenke-Borgmann et al. (2020) also found that clinical judgment was measured
using a variety of tools and methods, including the LCJR (8), multiple choice format
exams (5), checklists (1) and thematic analysis of student interviews, journals, and focus
groups. Klenke-Borgmann et al. (2020) identified a link between using simulation in the classroom and clinical judgment development. Notably, no studies discussed in their review examined pre-briefing and its effect on clinical judgment development.
additional literature specific to simulation and its effect on clinical judgment has been
an emerging trend in the literature (Kelly et al., 2022; Klenke-Borgmann et al., 2021;
Kool, 2022; Pardue et al., 2023; Rogers & Franklin, 2022). A notable study from
While physical engagement is often referenced as AL, there is additional support
that learners can also be actively engaged in simulation as observers (Bates et al., 2019;
Berndt et al., 2015; Hober & Bonnel, 2014; Howard, 2021; Johnson, 2019; MacLean et
al., 2019; Norman, 2018; Reime et al., 2017; Rode et al., 2016). Observers can be
recorded. Video recorded simulations (VRS) are often used in undergraduate nursing
emerged in the spring of 2020 during the COVID-19 pandemic (Palancia Esposito &
Sullivan, 2020). At times during the pandemic, nursing students were not allowed to go
into various healthcare settings. Pre-recorded simulations were often observed online as
a substitute for in-person clinical experiences. While observing VRS became the quick
fix to support continued clinical learning in nursing education during the pandemic, prior
literature involving outcomes of VRS was often based on student perceptions of their
learning or satisfaction with the activity, and did not involve assessment of clinical
reasoning or clinical judgment (Ferguson & Estis, 2018; Herron et al., 2019; Powers,
2020; Williams et al., 2009). Outcome assessment pertaining to clinical reasoning and
clinical judgment with VRS remains limited, although clinical reasoning and clinical
student perceptions and self-assessments (Andrea & Kotowski, 2017; Bambini, 2009;
Fogg et al., 2020; Lavoie, 2019; Roy, 2016). The most common instrument used by
students to self-assess clinical judgment is the LCJR (Andrea & Kotowski, 2017; Fogg et
al., 2020; Lavoie, 2019). Faculty assessment of clinical judgment is reported at different
phases of simulation and with many different instruments. Instruments used by faculty to
assess clinical judgment during simulation include the LCJR (Bussard, 2018; Coram,
2016; Johnson et al., 2012; Mariani et al., 2013; Reid et al., 2020; Shin & Kim, 2014), the
Creighton Competency Evaluation Instrument (CCEI) (Hayden et al., 2014; Kidd, 2017;
Page-Cutrara & Turk, 2017), pre-test post-test exams (Powell-Laney et al., 2012), and the
Korean Nurses’ Core Competency Scale (Shin et al., 2015) which is a version of the
simulation and debriefing was obtained with pretest-posttest methods (Dreifuerst, 2012),
qualitative student responses (Ashley & Stamp, 2014; Lasater, 2007a), and worksheet
simulation, it is essential that assessment methods are evidence based (Polit & Tatano
Beck, 2008). Pre-posttest methods may examine cognitive knowledge, though they may
not be a practical way to measure actual clinical judgment, as extensive testing that
determines the efficacy, reliability, and validity of faculty-developed methods would be
necessary. Developing test questions that measure higher-order constructs (like clinical
judgment) requires input from multiple clinical subject and statistical analysis experts (Betts et al., 2019). Further,
while self-report has been an easy way to measure student perceptions of their actions, it
is imperative that we begin to measure clinical judgment with instruments that have
strong support theoretically and psychometrically, with instruments that have undergone
rigorous evaluation (Polit & Tatano Beck, 2008). The CCEI and the LCJR are commonly used instruments that have undergone such evaluation.
The CCEI evolved from the Creighton Simulation Evaluation Instrument (C-SEI) (Todd et al., 2008). The C-SEI was developed to provide a means for quantitative evaluation of simulation performance in areas including critical thinking and technical skills. A total of 23 items are scored, with a score of one for
competent and a zero for not competent. Hayden et al. (2014) modified the C-SEI for use
in the National Council of State Boards of Nursing (NCSBN) study, which supported the substitution of simulation for a portion of traditional clinical learning. In the modified instrument there were minor wording alterations, and the
critical thinking and technical skills categories were “changed to clinical judgment and
patient safety” to better align with current nursing practice standards (Hayden et al.,
2014, p. 246). The revised instrument is the CCEI and contains 23 items that are
organized into four categories, with nine items specifically correlating to clinical
judgment.
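Given the dichotomous scoring described above, the attainable score ranges follow by simple arithmetic (the subscore framing is an inference from the item counts, not a statistic reported by Hayden et al.):

```latex
% CCEI: 23 dichotomous items overall, 9 of which map to clinical judgment
\[
\text{total score} \in [0,\; 23 \times 1] = [0, 23], \qquad
\text{clinical judgment items} \in [0,\; 9 \times 1] = [0, 9].
\]
```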
While the NCSBN study is well known within nursing simulation literature, other
researchers have utilized the CCEI to assess clinical competence in simulation as well
(Beman, 2017; Brennan, 2022; Kidd, 2017; Page-Cutrara, 2015; Raman et al., 2019).
The instrument has demonstrated psychometric validity and reliability (Hayden et al.,
2014; Manz et al., 2022). Hayden et al. (2014) obtained content validity from 35 expert
faculty who strongly agreed on the content of the tool (M = 3.86, SD = 0.22) and how
comparing faculty evaluations (using the CCEI) of three simulation videos that used the
same scenario with varying levels of clinical judgment proficiency. Rater agreement was
determined at 79.4 percent overall, with Cronbach’s alpha above .90. Manz et al. (2022)
determined content validity of the CCEI in the clinical practice environment by obtaining
survey responses (one [strongly disagree] to four [strongly agree]) from 31 clinicians
(M = 3.38, SD = 0.49) and ease of use (M = 3.35, SD = 0.55). Demonstrating validity and
reliability in both simulation and clinical practice, the CCEI evaluates multiple
Tanner’s Clinical Judgment Model and the Lasater Clinical Judgment Rubric
LCJR, which aligns with Tanner’s CJM. Tanner’s (2006) CJM serves two purposes: 1)
to describe the clinical judgment process as it occurs in practice and 2) to support nursing
faculty and students in identifying areas for clinical judgment growth. According to the
CJM, clinical judgment is influenced heavily by a nurse’s prior experiences (inside and
outside of the clinical setting). While an experienced nurse’s ability to engage in the
clinical judgment process is more fluid than that of a novice nurse (who lacks clinical experience), the model is applicable to nurses at all levels of experience. The CJM shows the reasoning pathways a
nurse uses in complex clinical situations, including noticing, interpreting, responding, and
reflecting. How the clinician notices, interprets, responds, and reflects on a clinical
norms followed in the workplace), and the nurse/patient relationship. When noticing,
interpreting, and responding, the nurse is thinking in action during the clinical situation,
and they are thinking on action during the reflection phase. While going through clinical
reasoning processes (i.e., noticing, responding, interpreting) “the nurse must be cognizant
of the patient’s need through data or evidence, prioritize and make sense of the data…
and come to some conclusion about the course of action” (Lasater, 2007b, p. 497). These
Lasater’s (2007a) earlier research used qualitative student data to demonstrate the
The LCJR provides the nursing student a clear outline of simulation expectations
(pertaining to clinical judgment) and is a tool for nursing educators that supports
(Lasater, 2007b). The LCJR (2007b) focuses and expands on the four components of Tanner's CJM, rating each of 11 dimensions at one of four levels: beginning, developing, accomplished, and exemplary. Total clinical judgment scores range from 11-
44, and points are awarded to each descriptor as follows: one point for beginning clinical
judgment, two points for developing clinical judgment, three points for accomplished clinical judgment, and four points for exemplary clinical judgment.
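The 11-44 range follows directly from the rubric's structure of 11 dimensions, each scored from one (beginning) to four (exemplary) points:

```latex
% LCJR: 11 dimensions, each scored 1-4
\[
\text{minimum} = 11 \times 1 = 11, \qquad \text{maximum} = 11 \times 4 = 44.
\]
```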
Psychometric validity and reliability of the LCJR have been established (Adamson
et al., 2011; Adamson & Kardong-Edgren, 2012; Chmil & Larew, 2013; Lasater, 2007b;
Shin & Kim, 2014; Strickland et al., 2017). Adamson et al. (2012) reviewed the
reliability and validity data from three different sources: interrater reliability was
confirmed by Adamson et al. (2011) with an intraclass correlation agreement (0.889) and
96% agreement between raters when comparing mean pre-posttest scores; Sideras (2007)
demonstrated a large effect size between student groups; and Gubrud-Howe (2008) also demonstrated
interrater reliability (r = 0.92 to 0.96). These studies reviewed by Adamson et al. (2012)
support the use of the LCJR when measuring clinical judgment in high-fidelity simulation. Victor-Chmil and Larew (2013) also reviewed
evidence pertaining to the validity and reliability of the LCJR. They identified published
peer reviewed journal articles and supportive gray literature and confirmed well
established content validity. Victor-Chmil and Larew recognized, though, that the
established reliability data is supported only when used with the undergraduate nursing student population, with further evaluation needed for
graduate students and practicing nurses. As a valid and reliable tool to assess clinical
judgment of undergraduate nursing students in simulation, the LCJR has also been
adapted for use in international research (Kim et al., 2016; Perbone Nunes et al., 2016;
Román-Cereto et al., 2018; Shin et al., 2015a; Vreugdenhil & Spek, 2018).
The LCJR has been used to study clinical judgment in a variety of ways, one being students' self-assessment
compared to nursing faculty’s assessment (Jensen, 2013; Strickland et al., 2017). Jensen
(2013) found that students and faculty score similarly in clinical judgment, but students
often rated themselves higher than faculty, with just three (out of 11) significant
relationships in ratings. Strickland et al. (2017) also noted similar clinical judgment
assessments between nursing students and nursing faculty, with a small positive
0.82). Though statistically significant, the variance accounted for is low, indicating a
weak correlation (Munro, 2005). While both Jensen and Strickland report an agreement
between student and faculty scores, they both note that the students did score themselves
slightly higher than faculty, on average, and discrepancies were thought to be related to
the students’ lack of clinical judgment understanding and evaluation experience. The
minor differences in scores from student and instructor did not interfere with the use of
the LCJR as a tool to stimulate a shared dialogue about the simulated clinical
experiences, but student self-assessment alone, despite use of a validated instrument, may
not provide an accurate assessment of their clinical judgment (Jensen, 2013). Student
support in understanding clinical judgment and the use of the rubric is warranted.
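The distinction between statistical significance and practical strength rests on the coefficient of determination, r²; as a brief illustration using a hypothetical coefficient (not a value reported in these studies):

```latex
% Variance accounted for is the square of the correlation coefficient
\[
r = 0.30 \;\Rightarrow\; r^2 = (0.30)^2 = 0.09
\]
```

That is, only about 9% of variance is shared, so a correlation of this size can reach statistical significance in a sufficiently large sample while remaining weak in practical terms (Munro, 2005).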
The LCJR has been used to assess clinical judgment in several other research
studies. Bussard's (2018) research using the LCJR determined a competency score of
clinical judgment based on specific end-of-program outcomes and stressed the need
to determine such scores if the rubric is used for high stakes purposes. Adamson (2016)
evaluated the effect of race/ethnicity bias on rater scoring of clinical judgment with the
LCJR, finding no significant impact (p = .753), further supporting its validity. Victor et
al. (2017) used both the C-SEI (performance) and the LCJR (judgment) to examine the
relationship between clinical judgment and performance in both the simulation and
0.87, p <.001) were noted. Similarly, Reid et al. (2020) used the LCJR to evaluate
nursing student clinical judgment in both simulation and hospital settings, finding no significant difference between students whose clinical
rotations involved only simulated experiences and students who experienced clinical
rotations in the hospital setting. While clinical judgment is more commonly assessed in
simulation and clinical environments, the LCJR has also been used to establish a clinical
judgment assessment in the virtual simulation environment and with journaling responses.
Many studies that have used the LCJR assessed clinical judgment progression as
the student engages in simulation and clinical experiences over time (Blum et al., 2010;
Bussard, 2018; Chmil et al., 2015; Fawaz & Hamdan-Mansour, 2016; Leijser & Spek,
2021; Shinnick & Cabrera-Mino, 2021; Victor et al., 2017). Shinnick & Cabrera-Mino
(2021) explain that a predictor of clinical judgment is years of experience, suggesting that
“educators should not expect large improvements in a student’s clinical judgment skills
until the student has further clinical experience as a nurse” (p. 109). Notably, these
conclusions contradict the goal of educators who strive to prepare graduating nursing
students with advanced critical thinking, clinical reasoning and clinical judgment skills to
provide safe care (Ashcraft et al., 2013). Despite the notion that clinical judgment takes
years to develop, there is some evidence of growth in short periods of time if learners are
provided experiences to foster clinical judgment development (Victor et al., 2017; Victor,
2017).
Heavily used, the LCJR and the CCEI are valid and reliable instruments for measuring clinical
judgment and clinical performance, and for assessing clinical judgment growth. The CCEI is a dichotomously scored instrument.
With several components of the CCEI measuring clinical competence, only nine items specifically measure clinical judgment. The CCEI can indicate
overall clinical judgment, though it does not determine the level of competence regarding
the varying degrees of clinical judgment. The LCJR not only measures overall clinical
judgment following simulation, the research scarcely focuses on evaluating the effect of specific components of the simulation experience. In
other words, clinical judgment scores are assessed post simulation, but the entire
simulation itself was the intervention. Research assessing the impact of specific pre-simulation scaffolds remains limited.
Scaffolds
and is applicable to the zone of proximal development (ZPD; the space where the learner could accomplish tasks/develop
skills if supported appropriately) (Belland, 2014). Some scholars argue scaffolding has drifted from Vygotsky's original conception. Vygotsky
(1978) defined the ZPD as “the distance between the actual developmental level as
determined through problem solving…” (pp. 86-87). Educators have assumed literally
that the learner will indeed accomplish short-term gains (as opposed to the long-term
gains Vygotsky’s theory proposes) if they are provided adequate support (Smagorinsky,
2018). But Vygotskian theory and the ZPD supported learning and development that
would stay with the learner and continue to grow when applied over time (Smagorinsky,
2018). Essentially, the goal of the scaffold with the novice learner is to encourage
independence within a certain social environment, and the scaffold serves as a mediator
education, a scaffold may be implemented to move the learner through the ZPD and
activities within shared and personal social context, and when designing scaffolds it is
area is key to facilitating cognitive growth, competence, and independence so the scaffold
can eventually be removed. With the goal of independence, scaffolds have been shown to support
competence (Lujan & Vasquez, 2010), writing skills (Motlhaka, 2020), providing
feedback (Barnard et al., 2015), problem solving skills (Tchounikine, 2019), language
skills (Takahashi, 1998), musical development (de Vries, 2005), and teaching (Bliss et al.).
While scaffolds are most notably referenced in regard to child learning and development, they have also been applied in higher education
(Brandenburg, 2021; Devereux & Wilson, 2008; Hardman & Ng’ambi, 2003; Korpi et
al., 2019; MacLeod & van der Veen, 2020; Meijerman et al., 2016; Neville, 2018;
Nichols & Nichols, 2006; Roberts, 2018; Rotsaert et al., 2018). Hardman & Ng’ambi
(2003) report usage of a computer-based scaffold they designed for learners in a Bachelor
of Education program, and findings from a study by Rotsaert et al. (2018) support
2021; Roberts, 2018) and engaging reading assignments (Devereux & Wilson, 2008).
Nichols & Nichols (2006) report scaffolding strategies to empower American Indian students in family and
consumer sciences. Scaffolds have also been used to improve motivation and promote
reflective practice (to support professional growth) for dental students (Meijerman et al.,
2016; Neville, 2018). Self-reflection and peer group reflections (with instructor support)
were successful scaffolds in physiotherapy education (Korpi et al., 2019), and scaffolds supported an
interdisciplinary team project for students from mathematics and engineering (MacLeod
2018). The effects of scaffolds vary, but a consistent theme is that intentional scaffolds
can promote successful outcomes where the learner progresses towards independent
conceptual application (Coombs, 2018). Eventually, the learner will be in a place where
the scaffold is no longer needed (Boblett, 2012). With the ability to support varying
learner educational levels and outcome goals, scaffolding practice is also present in
nursing programs.
Scaffolds in nursing education can support learners with both affective and cognitive functions. For example, art was used to help nursing students explore their
identity and what nursing meant to them (Hydo et al., 2007). Students were provided
with question prompts and participated in small group discussions to promote reflective
their view. Through qualitative inquiry, Hydo et al. deduced that the artistic creations
helped students gain self-awareness. The art scaffold supported heightened awareness that
the students may not have been able to achieve on their own. Another example of a
specific scaffold in a didactic nursing setting is the use of a social media platform
(Mistry, 2011). Connecting faculty and students on Twitter (now known as ‘X’) allowed
for supportive interactions that enhanced learning and reflection (Mistry, 2011).
Additionally, scaffolds were used to aid in the development of nursing students’ abilities to appraise research literature (Sakraida,
2020). Sakraida (2020) used three different scaffolds (exemplar articles, exemplar
appraisals, and guided written reflections) over a course semester to help learners achieve this outcome. Other scaffold-like supports
are present in nursing classrooms but are not specifically referred to as scaffolds.
Examples of interventions in nursing classrooms that are not explicitly called scaffolds
include mindfulness sessions (to reduce stress and anxiety), interactive puzzles (to
promote safe medication administration), group debates and video recorded simulations
(to enhance critical thinking skills) (Hadenfeldt et al., 2021; Leslie et al., 2020; Nurakhir
et al., 2020; Sharpnack et al., 2013). Whether through stimulating engagement, creating
completed with the aid of other persons or tools to guide them toward achievement, thus
fitting the scaffold paradigm. Another space in nursing education where scaffolds are not
explicitly defined as such is simulation. One of the leading scaffolds used in simulation is debriefing.
Post-Simulation Scaffold: Debriefing
The debrief scaffold of simulation has been highly studied and has demonstrated many
benefits that support essential nursing skill development. While the simulation debrief is
not the focus of this dissertation, a few examples are shared to demonstrate how
Tutticci et al., 2018). The reflective thinking in debriefing promotes reflective practice,
which aids learners in assessing their own learning needs, an essential component of
nursing practice (Benner et al., 2010). Additionally, a structured debrief after simulation promotes
skill/knowledge transfer and can help learners to develop critical thinking and clinical
reasoning skills necessary for clinical judgment (Decker et al., 2021; Dreifuerst, 2009;
Hines & Wood, 2016). A seasoned debriefer can help students take what was done in
simulation, apply and assimilate it to different situations, and build schema of nursing practice. Consistent with the
meaning of a scaffold, debriefing in nursing simulation provides the novice learner with
an expert guide to help them reflect and develop essential nursing practices, including clinical judgment.
Intra-Simulation Scaffolds
In addition to debriefing, there are some examples of scaffolds used during the
Najjar et al., 2015). Using a grounded theory approach, Najjar et al. (2015) explored students' learning in simulation; students perceived that
their learning was enhanced while observing their peers in simulation, and that watching
others during a simulation helped increase knowledge and skill development (Najjar et
al., 2015). Students also perceived that working with a peer intra-simulation provided
Standardized patients (SPs) (actors portraying a patient in the simulation) are also intra-
simulation scaffolds. The use of SPs promoted empathetic care in novice nursing
2020). Standardized patients who offered positive feedback increased nursing students’
self-assessed clinical judgment (Andrea & Kotowski, 2017). Sometimes the feedback from an SP serves as
cueing. Cueing in simulation, though, is a complex concept that has multiple purposes,
can come from sources other than the SP, and is explored further below.
A cue helps the learner achieve the simulation objectives (conceptual cues) or helps the learner
interpret the fidelity of the simulation (reality cues) (Paige, 2016). Importantly, cues used
in simulation are intended to be supportive, and they must not “interfere with [the
learner’s] independent thought” (Jeffries & Rogers, 2007, p. 29). A cue can emerge from
the equipment, environment, or from a character (i.e., SP, patient, embedded participant).
Morales and Hagler (2022) completed a scoping review of literature on how nursing
students recognize cues in simulation and found that most studies (n = 16/17) examined
the relationship between cue recognition and patient deterioration. Missed cue
recognition leading to a decline in patient status was the most prominent theme noted.
The literature varies in defining what cueing is in simulation education (Jeffries, 2005; Alessi, 2000; Adames et al., 2008;
Dieckmann et al., 2010). There are some studies that discuss unintended negative effects of cueing. Cues can be
distracting to learning, and it is important that both reality and conceptual cues are
planned, piloted, and intentional so as not to negatively influence learning (Biggs et al.,
2021; Jeffries, 2015; Paige & Morin, 2013; Paige & Morin, 2016). Escher et al. and
Adams et al. (in Paige & Morin, 2013) found that cueing during simulation can
negatively impact student team communication and collaboration, and when cueing is poorly executed, it can
affect the learners’ buy-in of learning with simulation activities. Other undesirable
outcomes of conceptual cue use include over controlling or interfering with students’
learning processes. Biggs et al. (2021) recognized how bias, formed as a result of
unintentional cueing in lethal force training, can impact future decision making. Biggs et
al. (2021) asserted that learners would begin to depend on cues that would not be present
naturally and that unintended cues can create a predictive tendency that learners may
apply in real life situations. The absence of an unintended cue that was present in
simulation, then, may impact the noticing of and responding to patient conditions, affect
decision making processes (clinical reasoning), and ultimately impact clinical decisions. Cue recognition is influenced by
familiarity (of medical conditions) and cue recognition instruction (Burbach &
Thompson, 2014; Thiele et al., 1986). Thiele et al. (1986) determined that carefully planned instruction supported
learners’ ability to recognize and sort cues related to patient situations. Burbach &
Thompson (2014) determined that nurses in clinical practice recognize and interpret cues
noticing and responding to cues in simulation is a great challenge for nursing students.
(Poledna et al., 2022). Teaching clinical judgment, beginning with noticing prominent
cues, is critical to prevent poor decision making (clinical reasoning) and poor clinical
judgments. Despite the importance of cue recognition in patient care, additional research is needed.
Pre-Brief Scaffolds
the simulated encounter (McDermott et al., 2021). Pre-brief scaffolds, within the
conceptual definition in this dissertation, are temporary supports for developing a deep
understanding that facilitates conceptual application and transfer. It is noted that some of
the literature involving pre-briefing describes interventions that may or may not be explicitly labeled as pre-briefing.
Regardless of the term used, pre-brief design is considered vital to successful simulation experiences. Much of the existing research has examined
learner perceptions of pre-brief concepts. Kim et al. (2017) studied the effects of pre-briefing on students’ perceived flow of the simulation that followed, while Solli et al.
(2020) explored student perceptions of the overall role of the pre-brief facilitator. Other
(Anderson et al., 2022). There are two studies that examine the learners’ perceived
effect of pre-brief (fictional contracts) on psychological safety (Stephen et al., 2020; Roh
& Jang, 2017). Notably, the evaluation criteria involved only student perceptions, and
such self-rated perceptions are highly subjective (Polit & Beck, 2008).
While subjective student perceptions are valuable in providing a more holistic
view when analyzing outcome data, limited other research has been conducted using objective measures. In one
(2021) study, students in the treatment group viewed a video of an expert role model
completing the psychomotor skill they would complete during the upcoming simulation.
Students in the treatment group scored higher on a researcher-developed skills checklist than students who did not view the video in the pre-brief.
While no reliability or validity was offered for the checklist, it was modeled after a skills
assessment book used by the institution, and one trained rater, blinded to
participant assignment, evaluated the skill performances to eliminate concerns for
inter-rater reliability. Beman (2017) also used objective measures to evaluate the effect
of pre-brief scaffolds. In Beman’s (2017) study, the CCEI was used to evaluate clinical competence following
simulation (Notarnicola et al., 2016). Regardless of the scaffold employed (standard pre-
brief, care planning, or concept mapping), there were no significant changes in
clinical competencies between participants in the two simulation scenarios used. This
result was not unexpected since the interrater reliability was statistically different (Kappa
= 0.096, p = 0.02). Beman also posited the varying scores may be due to faculty
clinical judgment (Page-Cutrara & Turk, 2017; Daley et al., 2017; Sharoff, 2015). Page-Cutrara & Turk (2017) applied reflection theory and concept mapping activities to
provide a structure to simulation pre-brief and used the CCEI to evaluate clinical judgment. A significant difference
(F(1,73) = 74.0, p < 0.001) between the treatment group (students receiving the
structured pre-brief) and the control group was found. Daley et al. (2017) also assessed
and behaviors were coded in alignment with components of Tanner’s CJM (noticing, interpreting, responding, and reflecting), though no report
of significance was provided between the two groups. Daley et al. (2017) did recognize
that the raters in the study were not blinded to the groups, causing a limitation relating to
the validity of the intervention. Sharoff (2015) explored a variety of pre-brief scaffolds
(i.e., pathophysiology review, images, videos, and handouts), and analyzed student
written responses (post simulation) to questions that were developed by the researcher
using Tanner’s CJM as a guide. Sharoff evaluated the students’ written reflections
with the LCJR and concluded that students perceived their clinical judgment was enhanced. No validity or reliability
information was shared regarding the researcher-developed reflection questions, and
using the LCJR to assess clinical judgment via student written reflections remains underreported.
Two additional studies examined the effect of a pre-brief scaffold on nursing
students’ clinical judgment. In both Johnson et al.’s (2012) and Coram’s (2016) studies,
faculty evaluated the students using the LCJR and found improved clinical judgment
scores after viewing of a VRS (with an expert role model performing the simulation)
during pre-brief. Coram reported the validity and reliability of the LCJR from
previous investigators (Adamson et al., 2012; Victor-Chmil & Larew, 2013),
though reliability of the LCJR specifically in Coram’s study was not reported. Evaluated
by expert faculty, the treatment group in Coram’s study scored significantly higher (p =
.00) in clinical judgment, receiving scores in the ‘developing’ category, while the control
group received scores in the ‘novice’ category. Johnson et al. (2012) also noted
significant differences in clinical judgment between groups, reporting a large effect size
(Cohen’s d ≥ 1.3), with the treatment group scoring higher in the noticing, interpreting,
and responding components of clinical judgment. While the nurse in the pre-brief video
would express their thoughts out loud in Johnson’s study, the students were handed a
document to read the nurse’s points of clinical reasoning and clinical judgments in
Coram’s study. Though the scaffold differed slightly between the studies, students in the
of different interventions, outcomes, and research designs. With just five studies noted in
the literature to evaluate pre-brief scaffolds’ effect on clinical judgment, and the
(Dileone et al., 2020). Of the five studies that examined the effect of pre-brief scaffolds
on clinical judgment, the type of pre-brief scaffolds varied, and the measurement of
clinical judgment has not been consistent. However, Johnson et al. and Coram
investigated a similar intervention (a VRS scaffold) and instituted design methods (i.e.,
blinded reviewers) to enhance the strength of their studies (Polit & Tatano Beck, 2008).
Only Johnson et al., however, explored the intervention at multiple study sites. Despite
Johnson et al.’s effort to obtain a diverse sample from multiple study sites, a convenience sample was used. While
generalizability is limited, the similar design, intervention, and results of Johnson et al.’s
and Coram’s studies offer support of the pre-briefing scaffold of expert role model observation in a VRS.
Of note, in both Johnson et al.’s and Coram’s studies, two student nurses
participated together in the simulation following the pre-brief scaffold of expert role
model observation in a VRS. While only the lead student nurse’s clinical judgment in the
simulation was evaluated, it is not apparent if the supporting student nurse influenced the
lead student nurse's clinical reasoning. It also remains unclear whether the simulation operator provided unintentional cues during the simulation experiences. Without this information, it cannot be determined whether the pre-brief scaffold of expert role model observation in a VRS influenced the change in clinical judgment or whether other factors were involved. To direct
development, there is a need to build on the existing data involving the pre-brief scaffold
of expert role model observation in a VRS. The PELS-CJI framework supports the use of
an I-VRS during pre-briefing to facilitate increased independent clinical judgment
development.
Summary
assessing clinical judgment. From this review, it is apparent that learning and teaching
experiences, the researchers often present qualitative data (from the student’s or their
own personal perspective). The CCEI and the LCJR offer a means for quantitative
competence.
With tools to assess clinical judgment in simulation learning experiences, the use
exploration. Scaffolds are teaching strategies that can strengthen learning and growth,
supporting the learner through the ZPD as they achieve independence. Intentional cueing
and debriefing are common scaffolds in nursing simulation. It is important to recall that
these scaffolds are used during and after participation in a simulation experience, leaving
on the PELS-CJI, a scaffold may additionally be situated prior to simulation
during pre-brief. Observation of a VRS prior to simulation has been shown to increase nursing student clinical judgment, though the studies that support this scaffold do not discuss the
effect of the VRS on the degree of independent clinical judgment. With expert clinical
judgment development over time a dominant theme in the literature, herein lies an
learning strategies with simulation that will support the student through the ZPD to
enhance their clinical reasoning skills and promote independent, enhanced nursing
brief scaffold (I-VRS of an expert role model that includes AL principles of prompting,
decision making. The following chapter describes the methodology used to test the
hypotheses, and Chapter IV and V address the study results and implications,
respectively.
Chapter III: Methods
The purpose of this chapter is to describe the methodology for this quantitative
through their ZPD toward clinical judgment independence. The research plan is
subjects, instruments used, research questions and null hypotheses, data collection, and
Design
powerful data relating to the research questions. Quantitative measures provide data
regarding variable relationships (Polit & Tatano Beck, 2008). Because the purpose of
this study is to identify the efficacy of a simulation pre-brief to support clinical judgment
independent variable, but also randomization of subjects and a control group (Polit &
Tatano Beck, 2008). To achieve randomization and avoid systematic bias, participants were randomly assigned to groups. Participants were involved in the simulation on their assigned clinical day, but a code generator was used to randomly assign participants to either the control (1) or treatment (2) group upon arrival at the simulation lab. The code generator used was the Android app Random Code Generator, chosen to support equal
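The randomized assignment described above can be sketched in code. This is only an illustrative Python sketch of balanced random assignment; the study itself used the Android Random Code Generator app, whose internals are not described here.

```python
import random

def balanced_assignments(n, seed=None):
    """Return shuffled group codes for n participants:
    1 = control, 2 = treatment, balanced as evenly as n allows."""
    codes = [1, 2] * (n // 2) + [1] * (n % 2)
    random.Random(seed).shuffle(codes)
    return codes

# e.g., 67 arriving participants -> 34 receive code 1 and 33 receive code 2,
# in a random order
groups = balanced_assignments(67)
```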
Setting
simulation lab in the college of nursing adheres closely to the International Nursing
of best practice.
Selection of Participants
this study. The convenience sample included two cohorts of nursing students (traditional
[ABSN]). All participants were in their senior year in the nursing curriculum.
Inclusion/Exclusion Criteria
Inclusion criteria involved senior-level students who had prior high-fidelity simulation experiences. To avoid prior exposure to the study-specific simulations and
increased exposure to class content, an exclusion criterion included any student who was
repeating the course. A yes/no question in the demographic survey asked if this was their first time enrolled in the course. If the answer was 'no,' the student was not observed for the study and their data was not included, though they participated in the same activities as all other students.
This research study involved student participants to inform a strategy and future
research in nursing simulation education. The Associate Dean for Research at the
selected college of nursing was contacted via written correspondence to obtain approval
for study implementation. Final approval was provided from the college of nursing’s
research review board, the university study site’s IRB, and the Indiana University IRB.
All potential participants were emailed a link to the study consent upon arrival to
the simulation lab, with options to either agree or decline to participate in the study. If
confidentiality, all data obtained from Qualtrics reports was securely stored in a cloud-secured Microsoft Teams drive. While participants who declined to participate were
collection of data occurred on participants who did not consent. Participant privacy was
maintained, and all data was deidentified. Upon arrival to the simulation lab, a teaching
number and oversaw the students entering their assigned participant identification number in
the survey consent. The teaching assistant correlated the assigned participant
identification number to the associated student case in CAE LearningSpace, the software
used to securely record the simulations. The researcher was only granted access to the
Recruitment Procedures
researcher presented the study to all students enrolled in the course one week before the
scheduled simulation. To avoid coercive bias, the class instructor was not present when
participants were informed of the research study. Students were provided the opportunity
to consider participation and ask any questions. Students were informed that there were
minimal risks to participating in this study, and that participation or lack of participation
would not impact the student’s course grade. With minimal risk to participants, IRB
exemption was obtained. The accessible population was n = 84, and the actual sample
population was n = 67. The sample population of the TBSN cohort was n = 36. The
Instruments
The LCJR was used to measure participant clinical judgment, one dependent
variable in this study. Clinical judgment refers to the nurse’s decisions and how they
respond to the patient’s needs based on the clinical situation. In this study clinical
judgment was operationalized as the total score for clinical judgment and sub scores for
noticing, interpreting, and responding on the LCJR. The reliability and validity of the
The LCJR focuses and expands on the four components of clinical judgment
accomplished and exemplary. Total clinical judgment scores range from 11-44, and
points are awarded to each descriptor as follows: 1 for beginning clinical judgment, 2 for
developing clinical judgment, 3 for accomplished clinical judgment and 4 for exemplary
clinical judgment (Lasater, 2007b). Overall scores are considered beginning (11),
of the intervention on clinical judgment in simulation, all data was collected prior to the
have been necessary to require participants to think out loud during the simulation
experience. Since requiring thinking out loud during simulation can decrease the
was not measured in this study and only the noticing, interpreting, and responding
components of clinical judgment were scored (Burbach et al., 2015). Clinical judgment
scores were thus adjusted by decreasing the maximum score by the maximum points that would have been awarded for the removed descriptors of the reflecting component (four points for each of two descriptors, for a total of eight points), and decreasing the minimum score by the minimum points that would have been awarded (one point for each of the two descriptors, for a total of two points). Clinical judgment scores obtained in this study
(28-36). The Lasater Clinical Judgment Rubric Scoring Sheet (LCJRSS) was used to
record total and subscale clinical judgment scores (Cato et al., 2009) (See Appendix B).
The LCJRSS was modified with permission to exclude the Reflecting components and
transposed into Qualtrics to securely store data. The LCJRSS is attached to the following
link: LCJRSS
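The score adjustment described above can be expressed as a short sketch. The category cutpoints follow the adjusted ranges reported in this study (beginning 9, accomplished 19-27, exemplary 28-36); the developing range of 10-18 is inferred from that pattern.

```python
# Full LCJR: 11 descriptors, each scored 1 (beginning) to 4 (exemplary)
FULL_MIN, FULL_MAX = 11, 44
REMOVED = 2                       # the two 'reflecting' descriptors excluded here

adj_min = FULL_MIN - REMOVED * 1  # 11 - 2 = 9
adj_max = FULL_MAX - REMOVED * 4  # 44 - 8 = 36

def rating(score):
    """Map an adjusted total score (9-36) to an overall LCJR rating."""
    if score <= 9:
        return "beginning"
    if score <= 18:
        return "developing"
    if score <= 27:
        return "accomplished"
    return "exemplary"
```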
Unintended conceptual cues were operationalized as a count of the number of cues that
are not planned and purposefully integrated into the simulation scenario(s), including
operator) to draw attention to the patient condition or otherwise allow the learner to
The I-VRS
The I-VRS was developed in accordance with the PELS-CJI. It was developed to
role model (active conceptualization) in a VRS. The Rose Smith simulation scenario
(one of three parallel simulation scenarios) was randomly selected to create the expert
nurse VRS and involved a pediatric client with pneumonia and a surgical site infection.
The simulation used for the expert nurse VRS was not used as a scenario for student
simulations during the study. The expert nurse in the video was a simulation teaching
assistant with more than five years of experience as a registered nurse and three years of simulation
teaching experience. The expert nurse was provided a script for the simulation
encounter. Key points of care where clinical reasoning informs clinical judgments were
identified in the video. At these key points, PlayPosit software was used to intentionally
stop the video and prompt learner responses to actively engage participants in the I-VRS.
The prompts asked participants to denote their thoughts to the correlating descriptors of
Once the participant entered their responses, the video resumed and the nurse explained
clinical judgment (noticing, interpreting, responding, and reflecting). After the expert
nurse explained their thoughts and rationale, the simulation continued until the next point of care where a clinical judgment was made, and the process repeated until the
Research Questions and Null Hypotheses
This study sought to collect data pertaining to three questions. The first research
question was:
pre-brief or no scaffold?
The null hypothesis was: There will be no significant difference in total clinical
The null hypothesis was: There will be no significant difference in the clinical
brief?
The null hypothesis was: There will be no significant difference in the number of
Data Collection
(Polit & Tatano Beck, 2008). The intervention and data collection procedures were
either the control (1) or treatment (2) group upon arrival at the simulation lab, and
oversaw the students entering their assigned group number in the demographic survey if they
activities and the simulation orientation, were delivered in video format to ensure
consistent delivery of the intervention. At the beginning of each pre-brief video, the
minimize post simulation collaboration and event detail sharing. In adherence to the
Treatment fidelity goals for the design of this study involved treatment dose
delivery consistency (Bellg et al., 2004). To ensure an equivalent pre-briefing "dose," all participants were allotted equal time during the pre-brief. Both the control and treatment
group received identical pre-brief instructions and access to the patient electronic chart,
the simulation scenario and objectives, and correlating readings from the assigned text.
The treatment group was additionally provided with the I-VRS, which took about 20
minutes to complete. The control group was instructed to review the correlating text
readings while the intervention group was completing the I-VRS. No less than 40 minutes was allowed for pre-briefing, which was completed in the simulation lab.
Additional steps were taken to maintain the treatment fidelity goals of this study, including training of those involved in the study implementation and simulation
delivery. All simulation operators involved in the study were familiar with the simulation
cases which had been implemented several times during previous semesters. Simulation
educator [CHSE]) occurred two weeks prior to the study implementation to ensure
standardization of simulation delivery. During the training session, the entire study
activities, and simulation delivery. Simulation operators took turns role-playing the student participants and the simulation operators multiple times. To avoid potential
influence of simulation cue delivery at the time of the study, the simulation operators
Simulation Scenarios
Three parallel simulations with different core scenarios but identical objectives
were developed by a team of expert nursing faculty and one simulation faculty who is a
CHSE. While not a focus of this dissertation study, the simulations were developed to
assess student competency based on the course objectives which align with the essential
evaluation and revision over a two-year period preceding this study. The three scenarios
pneumonia and 3) Rose Smith – a three-year-old with a surgical site infection (shunt) and
Two high fidelity Gaumard manikins were used for the simulation scenarios: one
pediatric Hal and one infant Hal. The manikins were operated by simulation lab staff who were trained in the operation of simulation equipment and the INACSL standards.
All student participant simulation encounters were recorded with the CAE LearningSpace video capture platform. The researcher observed videos of all student
participants completing their simulation. To allow for electronic data management the
LCJR score guide was transposed into Qualtrics Software. To measure the effect of the I-
VRS on conceptual cue recognition, the researcher reviewed all the simulation recordings
to count the number of unintended conceptual cues offered by the simulation operator.
The researcher counted all unintended conceptual cues, using the simulation scenario
Data Analyses
Data was analyzed only for participants who completed the demographic survey and consented to participate in the study. The researcher viewed all participant
management software. Statistical analysis of data was conducted using IBM SPSS
(version 29). The research questions were tested using an alpha of .05.
Demographic Data
Demographic data collected included gender, ethnicity, race, age, program enrollment
(TBSN versus ABSN), and course enrollment information. The demographic survey is
Descriptive Statistics
tendency and dispersion of the data. The mean, range, and standard deviation were used to analyze the age of participants. Modes were used to analyze gender, ethnic background, and
nursing program enrollment. Means and standard deviations were used to analyze
continuous data pertaining to clinical judgment scores. Mode was used to analyze
The first research question was: 1) Is there a difference in total clinical judgment
clinical judgment scores (sans ‘reflecting’ component) using the LCJR. The primary
researcher was experienced in utilization of the LCJR and scoring guide. They were the
sole evaluator of clinical judgment for this study and were blinded to the study groups.
Reliability and validity of the LCJR have previously been addressed in Chapter II. To ensure reliability of the LCJR scoring of the simulation used in this study, internal
This study tested for significant differences in total clinical judgment scores
between the control and intervention groups. Dependent upon assumptions being met,
significant differences between two groups (Pallant, 2020). In this study, the dependent
participants engaged in the scaffold during pre-brief and the simulation independently,
with histogram analysis, and homogeneity of variance was confirmed by a non-significant Levene's test (significance greater than .05) for equality of variances. Because the
assumptions for parametric techniques were met, an independent t-test was conducted.
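The assumption check and test described above can be sketched as follows. The scores here are hypothetical, not the study data, and the study itself conducted these tests in SPSS.

```python
import numpy as np
from scipy.stats import levene, ttest_ind

rng = np.random.default_rng(7)
control = rng.normal(25, 5, 36)    # hypothetical adjusted LCJR totals
treatment = rng.normal(28, 5, 31)

# Levene's test for homogeneity of variance (p > .05 -> assumption met)
w_stat, p_levene = levene(control, treatment)

# Independent-samples t-test; equal_var set according to the Levene result
t_stat, p_value = ttest_ind(control, treatment, equal_var=(p_levene > .05))
```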
The second research question was: Is there a difference in the clinical judgment
interpreting, and responding) of clinical judgment using the LCJR. To test the null
hypothesis that subscale scores of clinical judgment are independent of use of the I-VRS
scaffold during pre-brief, ordered metrics were assigned to each of the variables. No
scaffold use was coded as 1, and use of the I-VRS scaffold was coded as 2. Noticing,
accomplished (19-27), and exemplary (28-36). Because there were more than two
ordered categories represented in the subscale results, an ordinal chi-square (also known as the linear-by-linear association test) was utilized to test for differences. The ordinal chi-square can provide greater power and can detect any type of ordered pattern of
The third research question was: Is there a difference in the number of unintended
scaffold during pre-brief? To test the null hypothesis that there is no difference in the
use of the I-VRS scaffold or no scaffold during pre-brief, an ordinal chi-square test
was conducted.
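The linear-by-linear association (ordinal chi-square) test used for these questions can be sketched as a minimal implementation. The counts in the example are hypothetical; the study ran its tests in IBM SPSS.

```python
import numpy as np
from scipy.stats import chi2

def linear_by_linear(table, row_scores=None, col_scores=None):
    """Mantel-Haenszel linear-by-linear association test (ordinal chi-square).

    table: 2-D array of observed counts with ordered rows and columns.
    Returns (M2 statistic, p-value); df = 1.
    """
    table = np.asarray(table, dtype=float)
    n = table.sum()
    r = np.arange(1, table.shape[0] + 1) if row_scores is None else np.asarray(row_scores, float)
    c = np.arange(1, table.shape[1] + 1) if col_scores is None else np.asarray(col_scores, float)
    rbar = (r @ table.sum(axis=1)) / n
    cbar = (c @ table.sum(axis=0)) / n
    # Count-weighted Pearson correlation between row and column scores
    cov = ((r - rbar)[:, None] * (c - cbar)[None, :] * table).sum()
    corr = cov / np.sqrt((((r - rbar) ** 2) @ table.sum(axis=1))
                         * (((c - cbar) ** 2) @ table.sum(axis=0)))
    m2 = (n - 1) * corr ** 2       # test statistic, df = 1
    return m2, chi2.sf(m2, df=1)

# Hypothetical 2x3 table: rows = control/treatment, cols = ordered subscale categories
m2, p = linear_by_linear([[10, 18, 8], [4, 14, 13]])
```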
2020). Type I errors occur when the null hypothesis is rejected when in fact it is true. To
reduce error and increase statistical power, several factors are considered including
sample size, effect size, and alpha level. Johnson et al. (2012) conducted a study that
expert role model who discussed their decision processes was shown to students during
interpreting, responding, and reflecting) were measured with the LCJR post simulation
and debriefing. Data was analyzed with Kruskal-Wallis tests. Based on a sample size of
94, a post-hoc power analysis (alpha = 0.05) determined that a sample size of 23
participants in each group (control and intervention) would produce similar results. A
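A power calculation of this kind can be sketched with the noncentral t distribution. This is a sketch only: the large effect size (d = 1.3) follows the Johnson et al. result cited earlier, and the exact procedure used in that study's power analysis may have differed.

```python
import math
from scipy.stats import nct, t as t_dist

def ttest_power(d, n1, n2, alpha=0.05):
    """Two-sided power of an independent-samples t-test for effect size d."""
    df = n1 + n2 - 2
    ncp = d * math.sqrt(n1 * n2 / (n1 + n2))   # noncentrality parameter
    crit = t_dist.ppf(1 - alpha / 2, df)
    return nct.sf(crit, df, ncp) + nct.cdf(-crit, df, ncp)

# A large effect (d = 1.3) with 23 participants per group yields high power
power = ttest_power(1.3, 23, 23)
```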
Another study conducted by Coram (2016) involved similar design methods and
interventions. In Coram’s study, a video of an expert role model was shown to students
prior to simulation, but the expert nurse’s decision-making process was provided to
study (n = 43), power calculations (0.97-0.99) and effect sizes (1.18-1.83) were deemed
adequate. Given the similarity in design, it was expected that the sample size of 67
Limitations
Though the design of this study was carefully constructed, some limitations were
present. One significant limitation was that the study was conducted at one college of
nursing, with only students enrolled in a BSN program. The single-site study design can result in a low impact value (Duffy, Frenn, & Patterson, 2011). Another limitation
presents itself regarding the race/ethnicity of the study sample population. At the chosen
site for this pilot study, most nursing students are Caucasian females. The American
identifying with a race/ethnicity other than Caucasian, and the accessible study
2022).
Summary
The purpose of this chapter was to describe the research plan used to answer the
research questions of this dissertation study. The research questions and null hypotheses were presented, along with the study design, setting, selection of participants, protection of human subjects, instruments used, data collection, and data analysis procedures. Chapter IV will present the study results.
Chapter IV: Results
Chapter IV includes the results of this dissertation study which examined the use
development. The demographic data that describes the sample population is shared.
Descriptive statistics are presented to describe the nature of measures of central tendency
and dispersion of the data. Composite measures are presented regarding internal
consistency, reliability, and data distribution. Finally, the results from the research
Demographic Data
The accessible sample of undergraduate nursing students for this study was 84.
The actual sample of undergraduate nursing students for this study was 67. Seventeen
students were excluded from the participant sample for the following reasons: declined to consent (n = 7), arrived late to simulation resulting in a shortened pre-brief period (n = 1), did not complete the intervention in the allotted time (n = 3), and technical difficulties during simulation (n = 6).
enrollment (TBSN versus ABSN), and course enrollment information was collected.
as Asian. Participants ranged in age from 21 years to 29 years, and the mean age was
22.45 (SD = 1.579). A total of 53.7% (n = 36) were enrolled in the traditional
baccalaureate of science in nursing program, and 46.3% (n = 31) were enrolled in the
accelerated baccalaureate of science in nursing program. While overwhelmingly female
and white, the distribution across groups was similar and representative of the Midwest
university’s enrollment in the college of nursing. See Table 1 for the grouped frequency
distribution of participants.
Thirty-six of the sample population were randomly assigned to the control group
and 31 participants of the sample population were randomly assigned to the treatment
desired, though eight of the ten participants who were excluded had been assigned to the
Max Smith case, resulting in the following: of the 36 participants in the control group, 14
participated in the Max Smith case and 22 participated in the Tiffany Smith case. Of the
31 participants in the treatment group, 10 participated in the Max Smith case and 21
Since the ‘reflecting’ category of clinical judgment was not scored in this study,
clinical judgment scores were adjusted to reflect total subscales of the three components
(noticing, interpreting, and responding). The resulting categories were beginning (9),
groups had mean clinical judgment scores aligning with the accomplished category of
assigned to the Max case had a mean clinical judgment score aligning with the exemplary
category of clinical judgment. Participants in the treatment group assigned to the Tiffany
case had a mean clinical judgment score that aligned in between the accomplished and
exemplary categories (see Table 2). When means for treatment and control groups
overall were calculated, the control group had mean clinical judgment scores aligning in
the accomplished category and the treatment group had mean clinical judgment scores
Scoring was completed by one evaluator for the total clinical judgment scores
(sans ‘reflecting’ component) using the LCJR. To assess for reliability of the LCJR,
internal consistency was analyzed using Cronbach’s alpha. The resulting Cronbach’s
alpha for this study was .932, indicating a strong level of internal consistency of the
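Cronbach's alpha for a rubric such as the LCJR can be computed as follows. The item scores here are hypothetical, not the study data, and the study's alpha was computed in SPSS.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (observations x items) matrix of scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical rubric data: 20 participants x 9 descriptors scored 1-4
demo = np.random.default_rng(0).integers(1, 5, size=(20, 9))
alpha = cronbach_alpha(demo)
```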
Results of Research Questions
Each research question of the study is reviewed below. The analyses used and an explanation of how assumptions were met are shared. Tests used to confirm normality are
Research Question 1
The first research question was: Is there a difference in total clinical judgment
distribution of scores (Pallant, 2020). The total clinical judgment score is a continuous
interval variable and participants were randomly assigned to either control or treatment
groups. The Levene test statistic was analyzed to test for equality of variance, and the assumption was not violated. The assumptions for parametric tests were met, and the independent t-test was
used to analyze for significant differences of clinical judgment scores between groups.
their clinical judgment in simulation. The results of the independent samples t-test
showed that participants who received the I-VRS scaffold during pre-brief had elevated
to participants in the control group (n = 36, M = 25.06, SD = 5.275), t(65) = -2.653, p <
The Cohen's d statistic was used to quantify the effect size of the difference in the dependent variable between the levels of the independent variable, and the resulting statistic was -.650. A Cohen's d of -.650
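Consistent with the reported values, Cohen's d can be recovered from the t statistic and the two group sizes (a sketch; the study computed its statistics in SPSS).

```python
import math

def cohens_d_from_t(t, n1, n2):
    """Cohen's d for an independent-samples t-test, recovered from t."""
    return t * math.sqrt(1 / n1 + 1 / n2)

# Reported: t(65) = -2.653, control n = 36, treatment n = 31
d = cohens_d_from_t(-2.653, 36, 31)   # ≈ -0.650, matching the reported effect size
```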
Research Question 2
The second research question was: Is there a difference in the clinical judgment
clinical judgment using the LCJR. Because there were three ordered categories represented in the subscale results, an ordinal chi-square was used to test for differences
between groups. For the noticing subscale of clinical judgment, the resulting statistic was
relationship for both the noticing and responding subscale scores of clinical judgment and
the use of a scaffold during pre-brief, but not for the interpreting subscale component.
Research Question 3
The third research question was: Is there a difference in the number of unintended
scaffold or no scaffold during pre-brief? An ordinal chi-square test was used to test for differences between treatment and control groups (χ² = 2.828, df = 1, p = .093), showing
TABLE 4: Variable Impact on Unintended Conceptual Cues
Variable Impact    Chi-Square Statistic    df    Asymptotic Significance (two-sided)
UCC*group          2.828                   1     .093
difference was present. To see whether other variables had an impact on the number of UCC provided to participants in the treatment and control groups, further tests were completed. An ordinal chi-square test was conducted to assess for differences in the
number of UCC when accounting for the simulation case that was used, the simulation
operator, or the participants’ most recent clinical setting. The results similarly indicated
Given the gathered demographic data, it is apparent that most participants in this
study identified as white females with a mean age in their early twenties. Based on the
statistical analysis of data, we reject the null hypothesis for research question one, and
students engaged in simulation when differentiated by use of a scaffold during pre-brief
or no scaffold. When examining the individual components underlying CJ, there were
differences in the clinical judgment components of noticing and responding, but not in
the component of interpreting. Finally, the I-VRS did not appear to support a difference
in the number of unintended conceptual cues provided during simulation in this study.
The following and final chapter will present a discussion of these results in context with
current literature along with recommendations for future research, limitations and the
Chapter V: Discussion and Recommendations
framework, the concept of time and clinical judgment development, and the use of the
LCJR to evaluate clinical judgment are presented. The implications and limitations of
this dissertation research study are also explored, along with recommendations for future
research. A review of the current study including the problem, the purpose, and research
present with the clinical judgment skills necessary to provide safe care (Kavanagh &
Sharpnack, 2021). Nurse faculty must ensure education experiences prepare nursing
students with necessary skills, including safe clinical judgment. Simulation is a learning
strategy that has shown promise in supporting clinical judgment development. While
Problem
practice in nursing education, and the debriefing component of simulation has been
Purpose
The purpose of this study was to examine the use of a pre-brief scaffold to support
without a researcher-developed scaffold, the I-VRS. Clinical judgment in simulation was measured with the LCJR, and the number of UCC during simulation was counted to
There were three research questions. The first research question was 1) Is there a
results of this study supported that use of the I-VRS prior to simulation was associated with elevated mean total clinical judgment scores in the treatment group compared with the control group.
The second research question was 2) Is there a difference in the clinical judgment
relationship between the I-VRS and both the noticing and responding subscale scores of
clinical judgment, but not for the interpreting subscale component. The third research
question was 3) Is there a difference in the number of UCC provided during simulation
there was no statistically significant difference in the number of UCC provided
during simulation when differentiated using the I-VRS prior to simulation. A discussion
Discussion of Findings
A discussion of findings as they relate to the effect of the I-VRS during pre-brief
on total clinical judgment, individual components of clinical judgment, and cueing and
current literature. Further discussion on the use of the PELS-CJI framework to guide pre-
brief scaffold development, the concept of time and clinical judgment development, as
New graduate nurses are not currently equipped with clinical judgment skills
necessary for safe practice, and the limited literature reporting the effect of pre-brief
outcome measures. Only five studies were found in the literature which studied pre-brief
scaffolds’ effect on clinical judgment (Coram, 2016; Dailey et al. 2017; Johnson et al.
2012; Page-Cutrara & Turk, 2017; Sharoff, 2015). The scaffolds in these studies were
varied and involved concept mapping, pathophysiology review, images, videos, handouts,
and VRS. Additionally, these studies were inconsistent in their measures to evaluate
clinical judgment, utilizing the CCEI, coding methods, reflection evaluation, and the
LCJR. The varying scaffolds and outcome measures make it difficult to draw evidence-
based conclusions from existing data to inform best practices in scaffold use in pre-brief
to develop clinical judgment. There are two reports by Coram (2016) and Johnson et al.
(2012) that used a similar scaffold (a VRS) and evaluation measure (LCJR).
This dissertation study adds to the small body of existing evidence of using a VRS
during pre-brief to support clinical judgment development. The results of this study
suggest that the I-VRS pre-brief scaffold supports overall clinical judgment development
during simulation. These findings are consistent with those from research conducted by
Johnson et al. (2012) and Coram (2016) where using a VRS during simulation pre-brief
control groups. In the current study, mean clinical judgment scores for those in the
treatment group demonstrated exemplary clinical judgment. Coram found similar results,
though the clinical judgment scores indicated a beginning clinical judgment rating for
those in the control group and a developing clinical judgment rating for those in the
treatment group. Though the overall ratings between this study and Coram’s study varied
participants in Coram’s study were in their first medical surgical course, compared to the
participants in this study who were in their senior year in a nursing program. Recalling
that time to develop clinical judgment is a common theme in the literature, the fact that
the participants in this study have heightened clinical judgment compared to the
noticing and responding components of clinical judgment, but not to the interpreting
mean scores between treatment and control groups for all three components of clinical
judgment, noticing, responding and interpreting when using a VRS in simulation pre-
brief. Their study design included recorded video analysis of the simulation and
debriefing, allowing for the participants’ thought process to be examined. The lack of
thought process explanation in this study may explain the different results of the
interpreting component of clinical judgment between this study and Johnson’s study.
With few expressions of thought process, it should be considered that the interpreting
The addition of the evidence from the current study, when considered with Coram's and Johnson's findings, supports using a VRS with an expert role model during pre-brief to enhance overall clinical judgment and the specific components of clinical judgment, noticing and responding. Though the pre-brief intervention varied slightly in
all three studies, a common denominator in Johnson’s study, Coram’s study, and this
study is that AL principles were applied with an expert nurse in a VRS before the
judgments were explained to the viewer. In other words, how the expert nurse clinically
reasoned through the information presented in a clinical scenario, and how their clinical
reasoning led to clinical judgments, was shared. How the expert nurse shared their
thought processes (i.e., their clinical reasoning) varied slightly between studies. Whether
via verbal explanation (Johnson) during the VRS, a written supplement provided to the
student while observing the VRS (Coram), or I-VRS with PlayPosit (in this dissertation
study), the use of a VRS with expert nurse clinical reasoning explanations supports
While there was no significant difference between groups regarding the number
of UCC provided by the simulation operator, this finding holds value. Decreased UCC
may suggest greater independence in clinical judgment; however, there is not enough evidence in the literature about cueing to
interpret a similar cue count between groups negatively. The literature available
on cueing in simulation largely reflects conceptually defining and understanding the term
itself (Adams et al., 2008; Alessi, 2000; Dieckmann et al., 2010; Jeffries, 2005). A
scoping review of cueing in nursing simulation revealed missed cue recognition as the
most prominent theme amongst studies on cueing (Poledna et al., 2022). Cueing's
In clinical practice, early cue recognition is a critical skill of expert nursing care,
and the action of gathering cues is the first step to formulating clinical judgments
(Benner, Tanner, & Chesla, 2009; Hammond et al., as cited in Burbach & Thomson, 2014). One
prominent differentiator between novice nursing students’ and expert nurses’ cue
recognition is that the novice assigns equal value to all cues and examines them
sequentially. Such interpretation can lead to missed priorities in providing care. It is possible
that participants in the treatment group in this study, though requiring the same number of
cues as participants in the control group, were better at recognizing cues and
differentiating their importance. In other words, they may have noticed the same number
of cues but could more appropriately interpret their importance and respond accordingly.
To examine this notion further, we explored whether there were differences in the number of
cues provided and other variables (simulation operator, simulation scenario, and most
variables that could explain the difference in clinical judgment between participants in
the treatment and control groups, and supporting the thought that those who engaged with
the I-VRS were more able to independently recognize conceptual cues. With total
clinical judgment improved among participants in the treatment group in this study,
additional research on cueing is necessary before discounting the use of the PELS-CJI to inform
clinical judgment.
available in the literature, the PELS-CJI framework was developed to inform the I-VRS
pre-briefing activity. Jeffries (2015) calls for pre-brief methods to be researched for
effectiveness, and effective research must be supported with theory (Polit & Tatano Beck,
2008). While research has demonstrated that applying a model of experiential learning
recognize theoretical frameworks used to inform simulation design (Chmil et al., 2015;
Mariani, Fey & Gloe, 2018). The PELS-CJI offers a potential framework to inform the
subsequent transfer and application of learned concepts during simulation. The results of
this study showed more advanced clinical judgment in the treatment group. This finding
supports the need to further test the PELS-CJI to inform clinical judgment development.
In Chapter II, the concept of educational scaffolds was discussed. Critical elements of
scaffold design include individualized tailoring to learner needs with the goal being to
While the I-VRS shared in this study was identical for each participant, the interactive
features of PlayPosit provided participants with options to go at their own pace, to
revisit confusing concepts, and to respond to and reflect as much or as little as they
preferred, creating a tailored experience. Three participants in this study were unable
to complete the I-VRS activity in the allotted time. Extended time should
this study were presented the I-VRS scaffold during pre-brief, and then had the
without the scaffold present. In other words, the I-VRS in pre-brief supported learners in
moving beyond their current ability, through their ZPD, to a place of heightened potential
development in clinical reasoning and clinical judgment during the simulation. These
Time to Develop Clinical Judgment
clinical judgment development is also warranted. The results of this study showed
increased clinical judgment for participants who received the I-VRS during pre-brief just
prior to the simulation experience. In other words, a relatively small amount of time
judgment growth.
Clinical judgment, however, is not learned in a linear fashion in the nursing curriculum, and
education literature highlights the progression of clinical judgment development over the course
of a semester or entire curriculum (Blum et al., 2010; Bussard, 2018; Chmil et al., 2015;
Leijser & Spek, 2021). In addition to time to develop pertinent subject matter
experiences (Bussard, 2018; Fawaz & Hamdan-Mansour, 2016; Victor et al., 2017).
While literature describes time and experience as necessary factors to develop expert
clinical judgment, the results of this study support enhanced clinical judgment almost
and the actual simulation experience, more effective thinking during the simulation
and sociocultural learned concepts by engaging with the expert nurse in the I-VRS
that other I-VRS’s (that correlate with the learner’s current subject matter awareness)
The LCJR to Measure Clinical Judgment
Because clinical judgment involves the individuals’ prior experiences and other
factors, it is important to consider that the LCJR is not actually intended to measure clinical
judgment (Tanner, 2006; K. Lasater, personal communication, March 12, 2023). Rather, it
is designed to “describe the trajectory of students’ clinical judgment” over time (Lasater,
group had the opportunity to compare their personal beliefs and prior experiences to the
expert nurse’s thinking in the I-VRS, which may have corrected any misunderstood
concepts previously learned. Since the LCJR was used for this study to assess the impact
between groups, it can be suggested that the LCJR did offer a measure of clinical
and responding, but not interpreting. The researcher encountered difficulty evaluating
the interpreting component of clinical judgment as the LCJR defines interpreting in ways
that require insight into students' thought processes. While a method referred to as 'thinking out loud' has been used to assess
students’ clinical reasoning processes in simulation, thinking out loud can decrease the
authenticity of the simulation environment and be difficult for students when they are
already being challenged with complex simulation encounters (Burbach et al., 2015). So
as not to impact the fidelity or add more complexity to the students’ simulation
experience, participants in this study were not encouraged to ‘think out loud’.
Additionally, the simulation debrief was not recorded for evaluation. To better examine
the effect of the I-VRS on the ‘interpreting’ and ‘reflecting’ components of clinical
Implications
nurse educators can develop and research additional I-VRS (and other pre-brief scaffolds)
scaffold development are not apparent in the literature. With additional evidence
to inform pre-brief intervention design would add to the rigor and value of pre-brief
development, the I-VRS could be tailored to the level of learner development as they
Because an I-VRS followed by simulation provides double the experiential learning, additional
I-VRS pre-brief activities might collectively yield more advanced clinical judgment
skills from students at the completion of their nursing curriculum. Recalling that time
and experience are contributing factors to clinical judgment development, using I-VRS
preparedness for practice at graduation. Consistent with prior research, an I-VRS before
simulation could be routinely implemented as a standard of practice during simulation
Limitations
Though carefully designed to meet research standards, this study has limitations.
One limitation is that the I-VRS was one pre-brief scaffold made for
one simulation event. Additionally, the study was conducted at one point in time and
therefore does not demonstrate sustained clinical judgment growth. The degree of
retention and application of clinical judgment to different clinical scenarios has not been
examined. The interpreting component of clinical judgment was challenging to measure in this study since the participants' thought processes were not
apparent during the simulation, and debriefing analysis was not conducted. Thus, the
results may not be an accurate reflection regarding the interpreting component of clinical
judgment. Likewise, the reflecting component of clinical judgment was not measured in
this study. Further, the sample population used in this study involved one level of
learners at one college of nursing which limits the generalizability of findings to other
groups.
Recommendations
The limitations recognized in this study elicit several recommendations for future
research. The simulation event in this study did involve two separate simulation
scenarios; however, only two additional studies support the use of a VRS during pre-brief.
I-VRS pre-brief scaffolds involving different clinical cases, samples and populations
must be developed and tested to add to the validity of these findings. Additionally, the
effect of the I-VRS on clinical judgment was only measured at one point immediately
following the intervention, with one cohort of learners at one institution. Future research
should examine whether clinical judgment is retained and transferable to the clinical setting and if using an I-VRS would
Other research should also include approaches to assess the interpreting and
reflecting components of clinical judgment. To obtain data regarding the effect of the I-
VRS on interpreting and reflecting components of clinical judgment using the LCJR, a
independence in this study (cue counting) is a new concept. Future research should
Conclusion
With limited reports to date of research studies that objectively assess the effects
of pre-briefing strategies, this study provided empirical evidence on the use of an I-VRS,
underdeveloped clinical judgment amongst new nurses and the impact on patient safety,
undergraduate nursing curriculum. The methods of the dissertation research study were
detailed, along with the results, which showed an almost immediate clinical judgment
enhancement between control and intervention groups when an I-VRS was implemented
as a pre-brief scaffold.
attention to ensure new nurse graduates are better prepared with clinical judgment skills
prior to graduation, and highlights the important charge of nurse educators to support
nursing students in expedient clinical judgment development. This study showed that
simulation. The use of the I-VRS adds to the existing limited body of evidence related to
students.
Appendix A
Note. This rubric was produced by Lasater in 2007. From “Clinical Judgment
Journal of Nursing Education, 46(11), pp. 500-501. Copyright 2007 by Kathie Lasater,
EdD, RN.
Appendix B
References
Adams, W. K., Reid, S., LeMaster, R., McKagan, S. B., Perkins, K. K., Dubson, M., &
Available from ProQuest Dissertations and Theses database. (UMI No. 3460357)
Adamson, K. A., Gubrud, P., Sideras, S., & Lasater, K. (2011). Assessing the reliability,
validity, and use of the Lasater clinical judgment rubric: Three approaches.
20111130-03
Adamson, K. A., & Kardong-Edgren, S. (2012). A method and resources for assessing
Agresti, A. (2007). An introduction to categorical data analysis (2nd ed.). Hoboken, NJ: Wiley.
https://doi.org/10.1177/104687810003100205
American Association of Colleges of Nursing. (2022, September). Enhancing Diversity
information/fact-sheets/enhancing-diversity
sheets/nursing-faculty-shortage
https://files.eric.ed.gov/fulltext/ED433735.pdf
Anderson, M., Guido-Sanz, F., Talbert, S., Blackwell, C. W., Dial, M., McMahan, R. P.,
& Díaz, D. A. (2022). Augmented Reality (AR) as a Prebrief for Acute Care
https://doi.org/10.1016/j.ecns.2022.05.005
Andrea, J., & Kotowskit, P. (2017, July). Using standardized patients in an undergraduate
https://doi.org/10.1016/j.ecns.2017.05.003
Ashcraft, A. S., Opton, L., Bridges, R. A., Caballero, S., Veesart, A., & Weaver, C.
122–126.
Ashley, J., & Stamp, K. (2014). Learning to think like a nurse: The development of
525. 10.3928/01484834-20140821-14
Ayed, A., Khalaf, I. A., Fashafsheh, I., Saleh, A., Bawadi, H., Abuidhail, J., Thultheen, I.,
https://doi.org/10.1177/00469580221081997
Bambini, D., Washburn, J., & Perkins, R. (2009). Outcomes of clinical simulation for
https://pubmed.ncbi.nlm.nih.gov/19476069/
Bang, M. & Medin, D. (2010). Cultural processes in science education: Supporting the
https://doi.org/10.1002/sce.20392
Barnard, R., de Luca, R., & Li, J. (2014, April 8). First-year undergraduate students’
perception of lecturer and peer feedback: A New Zealand action research project.
https://doi.org/10.1080/03075079.2014.881343
Bates, T. A., Moore, L. C., Green, D., & Cranford, J. S. (2019). Comparing outcomes of
active student and observer roles in nursing simulation. Nurse Educator, 44(4)
216-221. https://doi.org/10.1097/NNE.0000000000000603
Belland, B. (2014). Scaffolding: Definition, current debates and future directions. In J.M.
1-4614-3185-5_39
Bellg A.J., Borrelli B., Resnick B., Hecht J., Minicucci D.S., Ory M., Ogedegbe G.,
Orwig D., Ernst D., Czajkowski S.; Treatment Fidelity Workgroup of the NIH
behavior change studies: best practices and recommendations from the NIH
https://dc.uwm.edu/etd/1585/
Benner, P., Sutphen, M., Leonard, V. & Day, L. (2010). Educating nurses: A call for
Benner, P. (1982). From novice to expert. American Journal of Nursing, 82(3), 402-407.
https://doi.org/10.1097/00000446-198282030-00004
Benner, P. (2000). From novice to expert. Upper Saddle River, NJ: Pearson.
Benner, P., Tanner, C.A., & Chesla, A. (2009). Expertise in nursing practice: Caring,
Berndt, J., Dinndorf-Hogenson, G., Herheim, R., Hoover, C., Lang, N., Neuwirth, J., &
Betts, J., Muntean, W., Kim, D., Jorion, N., & Dickison, P. (2019). Building a method for
writing clinical judgment items for entry-level nursing exams. Journal of Applied
Testing Technology, 20(2) 21-36. https://www.ncsbn.org/public-
files/Building_a_Method_for_Writing_Clinical_Judgment_It.pdf
Biggs, A. T., Pistone, D., Riggenbach, M., Hamilton, J. A., & Blacker, K. J. (2021,
https://doi.org/10.1016/j.apergo.2021.1034511
Bliss, J. Askew, M., & Macrae, S. (1996). Effective teaching and learning: Scaffolding
https://doi.org/10.1080/0305498960220103
Blum, C. A., Borglund, S., & Parcells, D. (2010). High-fidelity nursing simulation:
Blum, C. A., & Parcells, D. A. (2012). Relationship between high-fidelity simulation and
20120523-01
https://doi.org/10.7916/D84Q86KN
https://doi.org/10.1080/14623943.2020.1821626
Brennan, B. A. (2022). The impact of self-efficacy based on prebriefing on nursing
Brown, D., & Chronister, C. (2009). The effect of simulation learning on critical thinking
https://doi.org/10.1016/j.ecns.2008.11.001
https://doi.org/10.3928/01484834-20140806-07
Burbach, B. E., Barnason, S., Thomson, S. A. (2015). Using ‘think aloud’ to capture
Candela, L., Dalley, K., & Benzel-Lindley, J. (2006). A case for learning-centered
https://doi.org/10.3928/01484834-20060201-04
Cazzell, M., & Anderson, M. (2016). The impact of critical thinking on clinical judgment
Education Perspectives, 38(3), 119-125.
https://doi.org/10.1097/01.NEP.0000000000000135
Chi, M. T. H., & Wylie, R. (2014). The ICAP framework: Linking cognitive engagement
10.1080/00461520.2014.965823
Chmil, J. V., Turk, M., Adamson, K., & Larew, C. (2015). Effects of an experiential
https://doi.org/10.1046/j.1365-2648.2000.01414.x
Clapper, T. C. (2015, March 23). Cooperative-based learning and the zone of proximal
https://doi.org/10.1177/1046878115569044
Clapper, T. C. (2010, January). Beyond Knowles: What those conducting simulation need
to know about adult learning theory, Clinical Simulation in Nursing, 6(1), 7-14.
https://doi.org/10.1016/j.ecns.2009.07.003
https://doi.org/10.1016/j.nedt.2018.06.007
Coram, C. (2016) Expert role modeling effect on novice nursing students’ clinical
https://doi.org/10.1016/j.ecns.2016.04.009
Costello, M. (2017). The benefits of active learning: Applying Brunner’s discovery
https://doi.org/10.4324/9781315617572-14
Daley, B. J., Beman, S. B., Morgan, S., Kennedy, L., & Sheriff, M. (2017). Concept
maps: A tool to prepare for high fidelity simulation in nursing. Journal of the
https://doi.org/10.14434/josotl.v17i4.21668
International handbook of the learning sciences (pp. 34-43). New York, NY:
Routledge
Decker, S., Alinier, G., Crawford, S. B., Gordon, R. M., Jenkins, D., & Wilson, C.
https://doi.org/10.1016/j.ecns.2021.08.011
de Vries, P. (2005). Lessons from home: Scaffolding vocal improvisation and song
312. https://doi.org/10.1007/s10643-004-0962-2
Del Bueno, D. (2005). A CRISIS in critical thinking. Nursing Education Perspectives,
26(5), 278-282.
Devereux, L., & Wilson, K. (2008). Scaffolding literacies across the bachelor of
https://doi.org/10.1080/13598660801971633
Dickison, P., Haerling, K. A., & Lasater, K. (2019). Integrating the national council of
http://dx.doi.org.proxy.ulib.uits.iu.edu/10.3928/01484834-20190122-03
Dieckmann, P., Lippert, A., Glavin, R., & Rall, M. (2010). When things do not go as
10.1097/SIH.0b013e3181e77f74
Dileone, C., Chyun, D., Diaz, D. A., & Maruca, A. T. (2020). An examination of
https://doi.org/10.1097/01.NEP.0000000000000689
Docherty, A., Warkentin, P., Borgen, J., Garthe, K., Fischer, K. L., & Najjar, R. H.
https://doi.org/10.1016/j.profnurs.2018.05.001
Dreifuerst, K. T. (2012). Using debriefing for meaningful learning to foster development
333. https://doi.org/10.3928/01484834-20120409-02
Duffy, J. R., Frenn, M., & Patterson, B. (2011). Advancing nursing education science: An
https://doi.org/10.1016/j.nedt.2016.08.026
Ferguson, N. F., & Estis, J. M. (2018). Training students to evaluate preterm infant
https://doi.org/10.1044/2017_ajslp-16-0107
https://doi.org/10.1016/j.profnurs.2017.10.009
Fogg, N., Kubin, L., Wilson, C. E., & Trinka, M. (2020). Using virtual simulation to
Goodare, P. (2017). Literature review: Why do we continue to lose our nurses? The
https://www.ajan.com.au/archive/Vol34/Issue4/6Goodare.pdf
http://www.ncbi.nlm.nih.gov/books/NBK493175/
Hadenfeldt, C. J., Naylor, H. M., & Aufdenkamp, M. A. (2021). Escape the pharmacy:
https://doi.org/10.1097/01.NEP.0000000000000742
Luit & B. Csapo (Eds.), Teaching and learning thinking skills (pp. 11-36). Swets
and Zeitlinger.
Hanshaw, S. L., & Dickerson, S. S. (2020, July). High fidelity simulation evaluation
Hardman, J., & Ng’ambi, D. (2003). A questioning environment for scaffolding learners’
1130.1
Hayden, J., Keegan, M., Kardong-Edgren, S., & Smiley, R. A. (2014). Reliability and
validity testing of the Creighton competency evaluation instrument for use in the
Herron, E. K., Powers, K., Mullen, L., & Burkhart, B. (2019). Effect of case study versus
https://doi.org/10.1016/j.nedt.2019.05.015
Hines, C. B., & Wood, F. G. (2016). Clinical judgment scripts as a strategy to foster
https://doi.org/10.3928/01484834-20161114-05
Hober, C., & Bonnel, W. (2014). Student perceptions of the observer role in high-fidelity
https://doi.org/10.1016/j.ecns.2014.07.008
Hoffman, K.A., Aitken, L.M., & Duffield, C. (2009). A comparison of novice and expert
International Journal of Nursing Studies, 46(10), 1335–1344.
https://doi.org/10.1016/j.ijnurstu.2009.04.001
https://doi.org/10.1016/j.nedt.2018.02.013
https://doi.org/10.1097/01.NEP.0000000000000603
Huston, C. L., Phillips, B., Jeffries, P., Todero, C., Rich, J., Knecht, P., Sommer, S., &
Hydo, S. K., Marcyjanik, D. L., Zorn, C. R., & Hooper, N. M. (2007). Art as a
https://doi.org/10.2202/1548-923x.1330
https://doi.org/10.1016/j.ecns.2016.09.008
Jarvill, M. (2021). Nursing student medication administration performance: A
https://doi.org/10.1097/NNE.0000000000000828
https://doi.org/10.1016/j.nepr.2012.07.001
Johnson, B. K. (2019). Simulation observers learn the same as participants: The evidence.
https://doi.org/10.1016/j.ecns.2019.04.006
Johnson, E. A., Lasater, K., Hodson-Carlton, K., Siktberg, L., Sideras, S., & Dillard, N.
5026-33.3.176
https://doi.org/10.3912/OJIN.Vol26No01Man02
Kavanaugh, J., & Szweda, C. (2017). A crisis in competency: The strategic and ethical
Education Perspectives. 38(2), 57-62.
https://doi.org/10.1097/01.nep.0000000000000112
Kelly, M. A., Slatyer, S., Myers, H., Gower, S., Mason, J., & Lasater, K. (2022). Using
31–40. https://doi.org/10.1016/j.ecns.2022.06.003
http://hdl.handle.net/10342/6372
Kim, S.-J., Kim, S., Kang, K.-A., Oh, J., & Lee, M.-N. (2016). Development of a
caring for children with dehydration. Nurse Education Today, 37, 45–52.
https://doi.org/10.1016/j.nedt.2015.11.011
Kim, Y.-J., Noh, G.-O., & Im, Y.-S. (2017). Effect of step-based prebriefing activities on
https://doi.org/10.1016/j.ecns.2017.06.005
Kinyon, K., D’Alton, S., Poston, K., & Navarrete, S. (2021). Improving physical
https://doi.org/10.3390/nursrep11030057
Klenke-Borgmann, L., Cantrell, M. A., & Mariani, B. (2020). Nurse educators’ guide to
https://doi.org/10.1097/01.NEP.0000000000000669
https://doi.org/10.1016/j.ecns.2020.11.006
Korpi, H., Peltokallio, L., & Piirainen, A. (2018). Problem-based learning in professional
5015.1732
Kuiper, R., Heinrich, C., Matthias, A., Graham, M. J., & Bell-Kotwall, L. (2008).
Debriefing with the OPT Model of Clinical Reasoning during high fidelity patient
https://doi.org/10.2202/1548-923X.1466
https://doi.org/10.3928/01484834-20070601-06
Lasater K. (2007b). Clinical judgment development: Using simulation to create an
https://doi.org/10.3928/01484834-20071101-04
Lasater, K., Nielsen, A. E., Stock, M., & Ostrogorsky, T. L. (2015). Evaluating the
http://dx.doi.org.proxy.ulib.uits.iu.edu/10.3928/00220124-20151112-09
Lavoie, P., Pepin, J., Cossette, S., & Clarke, S. P. (2019). Debriefing approaches for
https://doi.org/10.1016/j.colegn.2019.01.001
Leijser, J., & Spek, B. (2021). Level of clinical reasoning in intermediate nursing
https://doi.org/10.1016/j.nedt.2020.104641
Leslie, J., Smith, C. R., Little, M. K., Schwytzer, D. J., Goodin, J., Rota, M. C., & Glazer,
https://doi.org/10.37506/ijone.v12i4.11219
Levett-Jones, T., Hoffman, K., Dempsey, J., Jeong, S., Noble, D., Norton, C. A., Roche,
J., & Hickey, N. (2010). The ‘five rights’ of clinical reasoning: An educational
model to enhance nursing students’ ability to identify and manage clinically ‘at
risk’ patients. Nurse Education Today, 30(6), 515–520.
https://doi.org/10.1016/j.nedt.2009.10.020
Lioce, L. (2020). Healthcare simulation dictionary (2nd Ed.). Agency for Healthcare
Lopez, V., Anderson, J., West, S., & Cleary, M. (2022). Does the COVID-19 pandemic
further impact nursing shortages? Issues in Mental Health Nursing, 43(3), 293–
295. https://doi.org/10.1080/01612840.2021.1977875
Lujan, J., & Vasquez, R. (2010). A case study of the Scaffolding Clinical Practicum
05
MacLean, H., Janzen, K. J., & Angus, S. (2019). Lived experience in simulation: Student
perspectives of learning from two lenses. Clinical Simulation in Nursing, 31, 1–8.
https://doi.org/10.1016/j.ecns.2019.03.004
MacLeod, M., & van der Veen, J. T. (2020). Scaffolding interdisciplinary project-based
377. https://doi.org/10.1080/03043797.2019.1646210
Manz, J. A., Iverson, L. M., Hawkins, K., Tracy, M. E., Hercinger, M., & Todd, M.
Mariani, B., Cantrell, M. A., Meakim, C., Prieto, P., & Dreifuerst, K. T. (2013).
Clinical Simulation in Nursing, 9(5), e147-55.
https://doi.org/10.1016/j.ecns.2011.11.009
Mariani, B., Fey, M., & Gloe, D. (2018). The simulation research rubric: A pilot study
https://doi.org/10.1016/j.ecns.2018.06.003
http://dx.doi.org/10.1007/s11422-012-9396-0
org.proxy.ulib.uits.iu.edu/10.1016/j.ecns.2016.02.001
and strategy (know: do: teach). Clinical Simulation in Nursing, 49, 40–49.
https://doi.org/10.1016/j.ecns.2020.05.005
McDermott, D. S., Ludlow, J., Horsley, E., & Meakim, C. (2021). Healthcare Simulation
Meijerman, I., Nab, J., & Koster, A. S. (2016). Designing and implementing an inquiry-
https://doi.org/10.1016/j.cptl.2016.08.001
Mistry, V. (2011). Critical care training: Using Twitter as a teaching tool. British Journal
Motlhaka, H. (2020). Blackboard collaborated-based instruction in an academic writing
Najjar, R. H., Lyman, B., & Miehl, N. (2015). Nursing students’ experiences with high-
National Academies of Sciences, Engineering, and Medicine. (2018). How People Learn
https://doi.org/10.17226/24783
https://doi.org/10.1080/14623943.2018.1437400
Nguyen, M.A. (2017). Liberal education and the connection with Vygotsky’s theory of
https://doi.org/10.17759/chp.2017130108
Nichols, T., & Nichols, L. S. (2006). 2+2+2: An equation for Native American student
success. In M. B. Lee (Ed.), Ethnicity matters: Rethinking how black, Hispanic, &
Indian students prepare for & succeed in college (pp. 57–80). Peter Lang
Publishing. https://www.ulib.iupui.edu/cgi-
bin/proxy.pl?url=https://search.ebscohost.com/login.aspx?direct=true&db=eue&
AN=39347342&site=ehost-live
Norman, J. (2018). Differences in learning outcomes in simulation: The observer role.
https://doi.org/10.1016/j.nepr.2017.10.025
Notarnicola, I., Petrucci, C., De Jesus Barbosa, M. R., Giorgi, F., Stievano, A., & Lancia,
Nurakhir, A., Palupi, F. N., Langeveld, C., & Nurmalia, D. (2020). Students’ views of
https://doi.org/10.14710/nmjn.v10i2.29864
https://doi.org/10.3928/01484834-20160114-01
Paige, J. B., & Morin, K. H. (2013). Simulation fidelity and cueing: A systematic review
https://doi.org/10.1016/j.ecns.2013.01.001
Palancia Esposito, C., & Sullivan, K. (2020). Maintaining clinical continuity through
Pallant, J. (2020). SPSS Survival Manual: A step by step guide to data analysis using
Pardue, K. T., Holt, K., Dunbar, D.-M., & Baugh, N. (2023). Exploring the Development
Perbone Nunes, J. G., Lasater, K., de Souza Oliveira-Kumakura, A. R., Garbuio, D. C.,
https://doi.org/10.5205/reuol.8200-71830-3-SM.1006sup201615
Poledna, M., Gomez-Morales, A., & Hahler, D. (2022). Nursing students’ cue recognition
10.1097/NNE.0000000000001198
Polit, D. F., & Tatano Beck, C. (2008). Nursing research: Generating and assessing
Postma, T. C., & White, J. G. (2015). Developing clinical reasoning in the classroom—
80. https://doi.org/10.1111/eje.12105
Powell-Laney, S., Keen, C., & Hall, K. (2012). The use of human patient simulators to
Powers, K. (2020). Bringing simulation to the classroom using an unfolding video patient
confidence, and perceptions of simulation design. Nurse Education Today, 86.
https://doi.org/10.1016/j.nedt.2019.104324
https://doi.org/10.28945/1546
Raman, S., Labrague, L. J., Arulappan, J., Natarajan, J., Amirtharaj, A., & Jacob, D.
https://doi.org/10.1111/nuf.12351
Reid, C. A., Ralph, J. L., El-Masri, M., & Ziefle, K. (2020). High-fidelity simulation and
https://doi.org/10.1177/0193945920907395
Reime, M. H., Johnsgaard, T., Kvam, F. I., Aarflot, M., Engeberg, J. M., Breivik, M., &
51–58. https://doi.org/10.1080/13561820.2016.1233390
Pedagogy and Education, 27(3), 313–326.
https://doi.org/10.1080/1475939X.2018.1447989
Rode, J. L., Callihan, M. L., & Barnes, B. L. (2016). Assessing the value of large-group
https://doi.org/10.1016/j.ecns.2016.02.012
Rodziewicz, T. L., Houseman, B., & Hipskind, J. E. (2022). Medical error reduction and
http://www.ncbi.nlm.nih.gov/books/NBK499956/
https://doi.org/10.1016/j.ecns.2022.08.001
Roh, Y. S., & Jang, K. I. (2017). Survey of factors influencing learner engagement with
simulation debriefing among nursing students. Nursing & Health Sciences, 19(4),
485–491. https://doi.org/10.1111/nhs.12371
Rotsaert, T., Panadero, E., & Schellens, T. (2018). Anonymity as an instructional scaffold
in peer assessment: Its effects on peer feedback quality and evolution in students’
perceptions about peer assessment skills. European Journal of Psychology of
http://hdl.handle.net/10755/601897
https://doi.org/10.3928/01484834-20200220-15
https://doi.org/10.16899/ctd.49922
Sharpnack, P. A., Goliat, L., Baker, J. R., Rogers, K., & Shockey, P. (2013). Thinking
like a nurse: Using video simulation to rehearse for professional practice. Clinical
Shin, H., Gi Park, C., & Shim, K. (2015). The Korean version of the Lasater Clinical
https://doi.org/10.1016/j.nedt.2014.06.009
3928/01484834-20140922-05
Shin, H., Sok, S., Hyun, K. S., & Kim, M. J. (2015). Competency and an active learning
591–598. https://doi.org/10.1111/jan.12564
Shinnick, M. A., & Cabrera-Mino, C. (2021). Predictors of nursing clinical judgment in
https://doi.org/10.1097/01.NEP.0000000000000604
it matter to literacy teachers? Journal of Adolescent & Adult Literacy, 62(3), 253–
257. https://doi.org/10.1002/jaal.756
Solli, H., Haukedal, T. A., Husebø, S. E., & Reierson, I. Å. (2020). The art of balancing:
https://doi.org/10.1186/s12912-020-00493-z
Stanley, M. J. C., & Dougherty, J.P. (2010). Nursing education model. A paradigm shift
380. https://doi.org/10.1043/1536-5026-31.6.378
Stephen, L., Kostovich, C., & O’Rourke, J. (2020). Psychological safety in simulation:
25–31. https://doi.org/10.1016/j.ecns.2020.06.010
Strickland, H. P., Cheshire, M. H., & March, A. L. (2017). Clinical judgment during
Tanner, C. A. (2006). Thinking like a nurse: A research-based model of clinical judgment
https://doi.org/10.3928/01484834-20060601-04
https://doi.org/10.1016/j.nepr.2012.07.007
Thiele, J. E., Baldwin, J. H., Hyde, R. S., Sloan. B., & Strandquist, G. A. (1986). An
investigation of decision theory: What are the effects of teaching cue recognition?
4834-19861001-05
Todd, M., Manz, J. A., Hawkins, K. S., Parsons, M. E., & Hercinger, M. (2008). The
https://doi.org/10.2202/1548-923X.1705
Tutticci, N., Ryan, M., Coyer, F., & Lewis, P. A. (2018). Collaborative facilitation of
debrief after high-fidelity simulation and its implications for reflective thinking:
student experiences. Studies in Higher Education, 43(9), 1654–1667.
https://doi.org/10.1080/03075079.2017.1281238
Victor-Chmil, J., & Larew, C. (2013). Psychometric Properties of the Lasater Clinical
45–52.
20171120-05
Victor, J., Chavez, L. S., & Podlesney, C. (2021). Do predictor exams really predict
48–54. https://doi.org/10.1016/j.ecns.2020.10.005
Victor, J., Ruppert, W., & Ballasy, S. (2017). Examining the relationships between
Vreugdenhil, J., & Spek, B. (2018). Development and validation of Dutch version of
https://doi.org/10.1016/j.nedt.2017.12.013
Wheeler, J., Dudas, K., & Brooks, G. (2021). Anxiety and a mindfulness exercise in
https://doi.org/10.1016/j.ecns.2021.05.008
117
Williams, B., French, J., & Brown, T. (2009). Can interprofessional education DVD
https://doi.org/10.1016/j.nedt.2009.02.008
Yuan, H., Williams, B., & Man, C. (2014). Nursing students’ clinical judgment in high
118
Curriculum Vitae
Emily S. McIntire
Education
2004 – 2005 BS, Nursing Science, Ferris State University, Big Rapids, MI
PROFESSIONAL EMPLOYMENT
Academic Appointments
Clinical Appointments
08/2011 – 05/2013 Staff Nurse, Pain Clinic, Sparrow Health Systems, Lansing, MI
08/2008 – 06/2010 Charge Nurse, Dialysis, Fresenius Medical Care, Charlotte, MI
06/2004 – 08/2007 Staff Nurse and Charge Nurse, Critical Care Surgical Stepdown, Sparrow Health Systems, Lansing, MI
GRANT FUNDING
Research
2023 – 2024 PhD student. Midwest Nursing Research Society: A
Simulation Pre-Brief Scaffold to Support Clinical Judgment
and Independence in Clinical Judgment Decision Making.
Funded in 2023 by Indiana University School of Nursing.
($1,500)
PUBLICATIONS
Liu, C.C., McIntire, E., Sender, J., & Ling, J. (under review, July 2023). Teaching
Social Determinants of Health in BSN Programs: An Integrative Review of
Strategies and Effectiveness. Nurse Educator. (IF=2.6)
Liu, C.C., Ling, J., Liu, C., Ammigan, R., Schrader, K., & McIntire, E. (2022).
Vaccination Rates Among International Students: Insights from a University
Health Vaccination Initiative. The Journal of American College Health.
https://doi.org/10.1080/07448481.2022.2155470 (IF=2.4)
PRESENTATIONS
McIntire, E., Sender, J., & Poindexter, K. (2018, May). Faculty and Student
Conceptions on Student-Centered Learning: A Quality Initiative Project.
Michigan State University Spring Conference for Teaching and Learning,
East Lansing, MI.
INVITED PRESENTATIONS
McIntire, E., & West, P. (2013, July). Online Learning: Strategies and Skills for
Success. Michigan State University College of Nursing, East Lansing, MI.
TEACHING
2013 – 2023 NUR 205, 323, 337, 371, 434, 438, 471: Several courses;
Instructor; taught alone and co-taught with TAs;
Clinical/Lab
OTHER