
A SIMULATION PRE-BRIEF SCAFFOLD TO SUPPORT CLINICAL JUDGMENT

AND INDEPENDENCE IN CLINICAL JUDGMENT DECISION MAKING

Emily S. McIntire

Submitted to the faculty of the University Graduate School


in partial fulfillment of the requirements
for the degree
Doctor of Philosophy
in the School of Nursing,
Indiana University

January 2024
Accepted by the Graduate Faculty of Indiana University, in partial
fulfillment of the requirements for the degree of Doctor of Philosophy.

Doctoral Committee

______________________________________
Barbara Manz Friesth, PhD, RN, Chair

______________________________________
Susan Hendricks, Ed.D, RN, CNE, ANEF

November 14, 2023

______________________________________
Deanna Reising, PhD, RN, ACNS-BC, FAAN, FNAP, ANEF

______________________________________
Joshua Danish, PhD

© 2024

Emily S. McIntire

ACKNOWLEDGEMENT

First and foremost, I would like to acknowledge my dissertation committee and

chair – Dr. Hendricks, Dr. Reising, Dr. Danish and Dr. Manz Friesth. Your thoughtful

questions and expert input helped me grow intellectually and emotionally, and I am

profoundly grateful for your time, engagement, and support! Special, heartfelt thanks to

Dr. Manz Friesth. I am so appreciative of your mentorship as my committee chair

through this journey! We have shared some challenging times – we made it through this

during a pandemic, and we’ve both experienced great losses. I cannot thank you enough

for staying by my side in this process, guiding me with patience and pushing me with

firm (and yet still gentle) kindness. Your support, expertise, collaboration, time, attention,

and encouragement are just a few words I could use to express what you have done to

help me through this process, but they hardly capture the depth of my gratitude and

genuine appreciation. Thank you a million times over! I could not have done this without

you!

I would also like to acknowledge all my colleagues who supported me, inspired

me, and reminded me that I could do it! Special thanks to Andy Greger – Andy, thank

you for helping me when I faced tech problems (even on the weekends) and educating me

on different technology! Thank you for sharing the excitement of learning with simulation and for helping make things happen! And perhaps most of all, thank you for our “therapy

sessions,” and of course, our friendship. I also want to acknowledge our simulation lab

team: Anna, Alexis, Stephanie, Trevor, Jeremy, and Callie. You were the best team ever!

I absolutely value your part in helping me conduct my study, but even more, I value you

each as peers, colleagues, and friends. You are the hardest working and most amazing

group! I have learned from each of you and am in awe of each of your individual

strengths. Thank you, thank you, thank you for being a part of my life! And for riding this

journey with me!

Finally, my family: Tom, Claire and Valerie. This was a family commitment, and

I just cannot describe how much your support has meant to me. From taking over dinner

and dishes, to making me study break snacks, to providing me with encouraging “I

believe in you!” or “You can do it!” sentiments, your part in this is immeasurable. Girls –

thank you for being so patient through all of this, allowing me to still be your mom while

investing so much time in my education. Tom – thank you for being there, for stepping

in, and for taking over when I just couldn’t handle one, more, thing! Thank you for the

breaks and coordinating vacations. Thank you for listening to my frustrations. Thank you

for everything you did and were during these past five years! We did it! And I am

looking forward to our next adventure (just no more school! I promise, for real this

time!)!

Emily S. McIntire

A SIMULATION PRE-BRIEF SCAFFOLD TO SUPPORT CLINICAL JUDGMENT

AND INDEPENDENCE IN CLINICAL JUDGMENT DECISION MAKING

It is essential that nurses independently assume patient care, yet new nurses lack

necessary clinical judgment skills. The purpose of this study was to examine a simulation

pre-brief scaffold to support nursing students’ clinical judgment development and clinical

judgment independence.

The pre-brief experiential learning scaffold for clinical judgment independence

(PELS-CJI) framework informed simulation pre-brief in this experimental study. A

convenience sample included traditional and accelerated Bachelor of Science in nursing

students in their senior year. Participants were randomly assigned to complete a

simulation pre-brief with or without the Interactive-Video Recorded Simulation (I-VRS).

Nursing students’ total clinical judgment and individual components of clinical

judgment (noticing, interpreting, and responding) in simulation were measured by a

single evaluator blinded to condition using the Lasater clinical judgment rubric (LCJR)

(Cronbach’s alpha .932). To measure clinical judgment independence, the number of

unintended conceptual cues during simulation was counted. Participants in the

intervention group had higher clinical judgment scores during simulation (n = 31, M =

28.45, SD = 5.163) as compared to the control group (n = 36, M = 25.06, SD = 5.275),

t(65) = -2.653, p < .01. A significant difference between groups was observed for the noticing and responding subscales of clinical judgment, but not for the interpreting subscale. No significant difference in the number of unintended cues was found between groups.
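Though not part of the study itself, the group comparison reported above can be sanity-checked: assuming a standard pooled-variance (independent-samples) t-test, which the reported 65 degrees of freedom imply, the published means, standard deviations, and group sizes reproduce the t-statistic. The sketch below uses only the summary statistics reported in this abstract.

```python
import math

# Group summary statistics as reported in the abstract
n1, m1, sd1 = 36, 25.06, 5.275  # control group
n2, m2, sd2 = 31, 28.45, 5.163  # intervention (I-VRS) group

# Pooled variance for an independent-samples t-test (equal variances assumed)
pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))

# t-statistic with df = n1 + n2 - 2 = 65
t = (m1 - m2) / se
print(f"t({n1 + n2 - 2}) = {t:.3f}")  # prints t(65) = -2.649
```

The small discrepancy from the reported t(65) = -2.653 reflects rounding of the published means and standard deviations.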

Results support that using an I-VRS in simulation pre-brief enhanced clinical

judgment in simulation. The use of the I-VRS adds to the existing limited evidence

related to simulation pre-brief to support clinical judgment development among

undergraduate nursing students. Future research using an I-VRS during pre-brief is

necessary to determine if improvement in clinical judgment is retained and transferable

to the clinical setting. Additional testing of the PELS-CJI to guide simulation pre-brief is

encouraged.

Barbara Manz Friesth, PhD, RN, Chair

Susan Hendricks, Ed.D, RN, CNE, ANEF

Deanna Reising, PhD, RN, ACNS-BC, FAAN, FNAP, ANEF

Joshua Danish, PhD

TABLE OF CONTENTS

List of Tables
List of Figures
List of Abbreviations
Chapter I: Introduction and Background
    Simulation to Develop Clinical Judgment Skills
    Problem Statement
    Purpose
    Research Questions
    Definition of Terms
    Theoretical Underpinnings for Learning Clinical Reasoning and Developing Clinical Judgment with Simulation
        Kolb’s Experiential Learning Theory (KELT)
        NLN Jeffries’s Simulation Theory
        Tanner’s Clinical Judgment Model
        Vygotsky: Sociocultural Theory, Zone of Proximal Development, and Scaffolding
    Conceptual Framework Informing this Study
    Significance
    Organization of this Research Dissertation
Chapter II: Literature Review on Learning, Teaching, and Assessing Clinical Judgment
    Learning Clinical Judgment from Constructivists and Sociocultural Perspectives
    Teaching Clinical Reasoning and Clinical Judgment
        Active Learning to Support Clinical Judgment Development
    Assessing Clinical Reasoning and Clinical Judgment in Simulation
        Creighton Competency Evaluation Instrument
        Tanner’s Clinical Judgment Model and the Lasater Clinical Judgment Rubric
        LCJR versus CCEI
    Scaffolds
        Scaffolds in Nursing Education
    Summary
Chapter III: Methods
    Design
    Setting
    Selection of Participants
        Inclusion/Exclusion Criteria
    Protection of Human Subjects
        Recruitment Procedures
    Instruments
        The I-VRS
    Research Questions and Nulls
    Data Collection
        Intervention Distribution and Pre-Briefing Activities
        Simulation Scenarios
    Data Analysis
        Demographic Data
        Descriptive Statistics
        Statistical Test – Research Question 1
        Statistical Test – Research Question 2
        Statistical Test – Research Question 3
        Power Calculations and Effect Size
    Limitations
    Summary
Chapter IV: Results
    Demographic Data
    Reliability of the LCJR
    Results of Research Questions
        Research Question 1
        Research Question 2
        Research Question 3
    Summary of Key Findings
Chapter V: Discussion and Recommendations
    Review of Current Study
        Problem
        Purpose
        Research Questions and Findings
    Discussion of Findings
        Effect of the I-VRS Scaffold on Total Clinical Judgment
        Effect of the I-VRS Scaffold on Individual Components of Clinical Judgment
        Effect of the I-VRS on Cueing and Clinical Judgment Independence
        PELS-CJI and the I-VRS Scaffold
        Time to Develop Clinical Judgment
        The LCJR to Measure Clinical Judgment
    Implications
    Limitations
    Recommendations
    Conclusion
Appendices
    Appendix A: Lasater Clinical Judgment Rubric
    Appendix B: Lasater Clinical Judgment Scoring Sheet
References
Curriculum Vitae

LIST OF TABLES

Table 1: Demographic Distribution of Study Participants
Table 2: Mean Total Clinical Judgment by Case and Group
Table 3: Total Clinical Judgment
Table 4: Variable Impact on Unintended Conceptual Cues
Table 5: Additional Variable Impact on Unintended Conceptual Cues

LIST OF FIGURES

Figure 1: Tanner’s Clinical Judgment Model
Figure 2: Pre-brief Experiential Learning Scaffold for Clinical Judgment Independence (PELS-CJI)

LIST OF ABBREVIATIONS

AACN American Association of Colleges of Nursing

ABSN Accelerated Bachelor of Science in nursing

AL Active learning

CCEI Creighton Competency Evaluation Instrument

CCNE Commission on Collegiate Nursing Education

CJM Clinical judgment model

C-SEI Creighton Simulation Evaluation Instrument

INACSL International Nursing Association of Clinical Simulation Learning

IRB Institutional review board

I-VRS Interactive video recorded simulation

KELT Kolb’s experiential learning theory

LCJR Lasater clinical judgment rubric

LCJRSS Lasater Clinical Judgment Rubric Scoring Sheet

NCSBN National Council of State Boards of Nursing

PELS-CJI Pre-brief Experiential Learning Scaffold for Clinical Judgment

Independence

SPs Standardized patients

TBSN Traditional Bachelor of Science in nursing

UCC Unintended conceptual cues

VRS Video recorded simulation

ZPD Zone of proximal development

Chapter I: Introduction and Background

Over 10 years ago, the Institute of Medicine released a landmark report on the

future of nursing, calling for re-envisioned health care models emphasizing the need for

nurses to obtain and maintain advanced skills and knowledge to provide safe patient-centered care. As nurses are key contributors to patient safety and positive patient outcomes, steps must be taken to prevent the adverse events and medical errors that lead to as many as 100,000 patient deaths every year

(Rodziewicz et al., 2022). Unfortunately, the nursing workforce struggles with an acute shortage of experienced registered nurses, with over one million nurses, nearly one-third of the nursing workforce, at or quickly approaching retirement (Haddad et al.,

2022). Goodare (2017) recognized the alarming trend of new graduate nurses leaving the

nursing workforce soon after graduation due to the stress of the role, and current turnover

rates due to burnout are estimated as high as 37% in some areas of the United States

(Haddad et al., 2022). Alarmingly, the level of fatigue and mental health suffering

resulting from the COVID-19 pandemic is certain to negatively impact nursing retention

rates even more (Lopez et al., 2022). As the future of the nursing workforce seems grim,

it is especially disturbing that thousands of qualified students are turned away from

schools of nursing every year due to a parallel nursing faculty shortage, coupled with

insufficient clinical placement opportunities (American Association of Colleges of

Nursing [AACN], 2022).

Despite the shortage of nursing faculty, current nurse educators must still prepare

nursing students to be clinically competent upon graduation to support patient care

practices where adverse events are avoided (AACN, 2022). Clinical competency in

nursing practice involves advanced skillsets and problem-solving abilities, including

clinical judgment. Clinical judgment in nursing is “the observed outcome of critical

thinking and decision-making… an iterative process that uses nursing knowledge to

observe and assess presenting situations, identify a prioritized client concern, and

generate the best possible evidence-based solutions in order to deliver safe client care”

(Dickison et al., 2019, p. 2).

Clinical judgment is not to be confused with critical thinking or clinical

reasoning. Critical thinking involves the processes of clinical reasoning and clinical

judgment. While critical thinking can occur during clinical situations, critical thinking is

often occurs outside of the patient encounter and involves broader clinical issues (e.g., system failures, communication breakdown, team collaboration) with a more

general impact on one’s nursing practice (Benner et al., 2010). Clinical reasoning is

narrower in scope than critical thinking; it comprises the actual thought processes that lead to patient care decisions at the point of care. During the clinical reasoning process, the

nurse is gathering, analyzing and evaluating patient and environmental information to

allow them to make informed and safe decisions. The nurse's decisions and how they

respond to the patient’s needs based on the clinical situation are known as clinical

judgments.

With patient safety in mind, health system administrators have argued that new

graduate nurses lack skills necessary for good clinical judgment, and recent data suggests

a mere 9% are ready for practice (Kavanagh & Sharpnack, 2021). New nurse graduates

are not exemplifying the skills (including clinical judgment) to meet the complex

demands of nursing practice required to provide safe patient care (Del Bueno, 2005;

Victor et al., 2021). With nurse turnover at an all-time high, it is essential that new

nurses are ready to independently assume patient care, yet new nurses’ competency and readiness for practice are lacking. Nursing students need additional

learning opportunities to enhance their clinical judgment. One of the most common

strategies that nurse educators use to support clinical reasoning and clinical judgment

development is simulation and subsequent debriefing (Lasater et al., 2015).

Simulation to Develop Clinical Judgment Skills

To provide nursing students learning opportunities to help prepare them for

practice, active learning (AL) instructional methods have been explored to promote

application of content and learner engagement (Candela et al., 2006; Drake, 2012;

Stanley & Dougherty, 2010). Perhaps the leading AL strategy in nursing education is

simulation (Hanshaw & Dickerson, 2020). Simulation in healthcare involves three main

components: pre-briefing, the simulation experience, and debriefing. While

pre-briefing involves “An information or orientation session held prior to the start of a

simulation activity in which instructions or preparatory information is given to the

participants,” the simulation experience (intra-simulation) provides learners the

opportunity to experience events that mimic a real-life situation, and post simulation

debriefing provides learners the opportunity to assimilate learned constructs to future

clinical events (Lioce, 2020, p. 37). Simulation in nursing education supports learners in

deepening their understanding to better execute and apply psychomotor and cognitive

skills, including clinical reasoning, to inform clinical judgments and meet competency

demands of nursing practice (Klenke-Borgmann et al., 2020). In clinical settings

impacted by the nursing shortage and ever-increasing complexity of patient care,

simulation experiences offer an effective solution to prepare nursing students for

independent practice.

Despite the use of simulation as an educational strategy, new nurses continue to

exhibit underdeveloped competency in clinical practice. Though simulation allows

nursing students to practice applying clinical reasoning to make clinical judgments,

application and transfer of complex skills require assisted development (Belland, 2014).

Appropriate instructional scaffolds (temporary supported action in learning to develop

deep level, complete understanding and advancement in learning processes to facilitate

conceptual application and transfer) are applied in simulation experiences to help nursing

students develop clinical reasoning and clinical judgment (Horton, 2008).

Scaffolds are integrated pre-simulation (i.e., pre-briefing), intra-simulation (i.e.,

cueing) and post-simulation (i.e., debriefing). Debriefing is heavily studied and reported

in the literature, and simulation standards recognize the need for structured debriefing

following a simulation activity. If appropriately facilitated, debriefing can promote

critical thinking, clinical reasoning, and clinical judgment skills (Dreifuerst, 2009). Pre-briefing has become an area of interest within simulation research in recent years, and the current literature largely reflects student perceptions of pre-briefing efficacy. Pre-briefing activities help the learner begin problem-solving regarding patient care

(McDermott, 2020). If conducted by an expert facilitator, pre-briefing may even help

inexperienced learners better gather pertinent patient information to inform patient care

(McDermott, 2016).

Cueing is another known concept in simulation. Cues in simulation can be in the

form of reality cues or conceptual cues. Reality cues offer information “…to help the

learner interpret or clarify simulated reality through information delivered during the

simulation” (Lioce, 2020, p. 12). Conceptual cues differ from reality cues and involve

“Information provided to help the learner reach instructional objectives through

programmable equipment, the environment, or through responses from the simulated

patient or role player” (Lioce, 2020, p. 12). Simulation operators provide reality cues to

help clarify learner misconceptions brought forth by the simulated environment (i.e., the

simulation operator might say [speaking as the manikin] “it’s only 9:00” if the student

looks at their watch, when it is, in fact, 2:30). Conceptual cueing, on the other hand, is

done to provide the learner with assistance to help them meet the simulation objectives

(i.e., if the student does not look at their watch, but the time is critical to determine if a

medication is appropriate to give, the simulation operator might say [speaking as the

manikin] “what time is it now?”). Conceptual cueing, then, is like a hint to help the

learner along in the simulation scenario, and is acknowledged as a form of instructional

support (Paige, 2016). Both reality and conceptual cues should be planned and

purposefully integrated into simulation (Jeffries, 2015). Preliminary literature suggests

unintended cues (cues that are not planned or that are not normally present and may

misrepresent a real situation) can negatively influence learning (Biggs et al., 2021;

Jeffries, 2015; Paige & Morin, 2013). Learners may become dependent on unintended cues, and unintended cueing can interrupt team communication. Such disruption

may impede learner independence. To prevent disrupting the learning processes, all

scaffolds used in simulation (such as debriefing, cues and pre-briefing) must be

thoughtfully considered and tested for efficacy (Jeffries, 2015).

Problem Statement

Considering the need to better prepare undergraduate nursing students for the

complexity of practice today, more information is needed on practices to improve

students’ ability to make good clinical judgments, and to do so independently. Research

involving the debriefing scaffold is prominent, and debriefing has demonstrated an ability

to improve clinical judgment (Dreifuerst, 2009). However, literature on pre-briefing in simulation rarely extends beyond student perceptions. Simulation pre-brief scaffolds to support nursing students’ clinical reasoning skill development and independent clinical judgment remain understudied. A gap in the literature exists regarding best practices in scaffold techniques before simulation to support the student during simulation; more specifically, there is limited literature on pre-simulation (pre-brief) scaffolds that demonstrate improvements in clinical judgment.

Purpose

The purpose of this study was to examine the use of a pre-brief scaffold to support undergraduate nursing students’ clinical judgment development and clinical judgment independence. Using an experimental design, participants completed a pre-brief with or without a researcher-developed scaffold, the Interactive-Video Recorded Simulation (I-VRS). Following the pre-brief, participants engaged in a simulation involving a high-fidelity manikin. Undergraduate nursing students’ clinical judgment in simulation was measured with the Lasater clinical judgment rubric (LCJR). To measure clinical judgment independence, the number of unintended conceptual cues during simulation was counted.

Research Questions

The research questions of this study were:

1) Is there a difference in total clinical judgment among prelicensure nursing

students engaged in simulation when differentiated by use of a scaffold during

pre-brief or no scaffold?

2) Is there a difference in the clinical judgment components of noticing, interpreting,

and responding among prelicensure nursing students engaged in simulation when

differentiated by use of a scaffold during pre-brief or no scaffold?

3) Is there a difference in the number of unintended conceptual cues provided during

simulation when differentiated by use of a scaffold or no scaffold during pre-

brief?

Definition of Terms

To facilitate understanding of this dissertation study, a list of definitions of key

terms is presented.

Clinical Judgment (Conceptual Definition) - Involving clinical reasoning processes,

clinical judgment is “the observed outcome of critical thinking and decision-making… an

iterative process that uses nursing knowledge to observe and assess presenting situations,

identify a prioritized client concern, and generate the best possible evidence-based

solutions in order to deliver safe client care” (Dickison, Haerling & Lasater, 2019, p. 2).

Clinical Judgment (Operational Definition) - Total score for clinical judgment and sub

scores for noticing, interpreting, and responding on the LCJR. See also Lasater Clinical

Judgment Rubric in definitions.

Clinical Reasoning - The actual thought processes of gathering, analyzing and evaluating

patient and environmental information that lead the nurse to make informed patient care

decisions (Benner et al., 2010).

Conceptual cues (in simulation) - “Information provided to help the learner reach

instructional objectives through programmable equipment, the environment, or through

responses from the simulated patient or role player” (Lioce, 2020, p. 12). Some

conceptual cues in simulation replicate conceptual cues that are naturally present in an

actual clinical setting or situation. Other conceptual cues in simulation involve

exaggerated clinical presentation progression or patient comments (by the simulation

operator) to draw attention to the patient condition and offer the learner assistance to help

them meet the simulation objectives.

Critical Thinking - Reasonable, skillful, and reflective thinking that informs decisions. In

nursing practice, critical thinking involves the processes of clinical reasoning and clinical

judgment (Benner et al., 2010).

Debrief - “A formal, collaborative, reflective process within the simulation learning

activity” (Lioce, 2020, p. 13) that “promotes understanding and supports transfer of

knowledge, skills and attitudes with a focus on best practices to promote safe, quality

patient care” (INACSL Standards Committee, 2016, p. S21). Debriefing promotes

critical thinking, clinical reasoning, and clinical judgment development (Dreifuerst,

2009).

Lasater Clinical Judgment Rubric (LCJR) - A rubric that measures four components of

clinical judgment (noticing, interpreting, responding, and reflecting) with 11 total

correlating descriptors (each with associated developmental levels: beginning,

developing, accomplished and exemplary).

Pre-brief - “An information or orientation session held prior to the start of a simulation

activity in which instructions or preparatory information is given to the participants”

(Lioce, 2020, p. 37). Effective pre-briefing allows simulation participants to gather

pertinent patient information to inform patient care (McDermott, 2020).

Reality cues (in simulation) - Information provided “…to help the learner interpret or

clarify simulated reality through information delivered during the simulation” (Lioce,

2020, p. 12). Reality cues clarify learner misconceptions brought forth by the simulated

environment.

Scaffold (Conceptual Definition) - Temporary supported action in learning to develop

deep-level, complete understanding and advancement in learning processes to facilitate

conceptual application and transfer (Horton, 2008).

Scaffold (Operational Definition) - Pre-brief with an interactive video recorded

simulation (I-VRS) of an expert role model in a parallel simulation. Learner responses

(involving clinical judgment identification, comparison, and exploration) are prompted

with components of the LCJR via an interactive interface.

Simulation - “A technique that creates a situation or environment to allow persons to

experience a representation of a real event for the purpose of practice, learning,

evaluation, testing, or to gain understanding of systems or human actions” (Lioce, 2020,

p. 44).

Unintended conceptual cues (in simulation) - Cues that are not planned or that are not

normally present and may misrepresent a real situation.

Unintended conceptual cues (in simulation) (Operational Definition) - A count of the

number of cues that are not planned and purposefully integrated into the simulation

scenario(s), including exaggerated clinical presentation progression or patient comments

(by the simulation operator) to draw attention to the patient condition or otherwise allow

the learner to progress in the scenario.

Zone of Proximal Development (ZPD) - The difference between where a learner is in

their ability to accomplish tasks/develop skills (on their own) and the potential learning

that could occur (with appropriate assistance).

Theoretical Underpinnings for Learning Clinical Reasoning and Developing

Clinical Judgment with Simulation

To enhance simulation experiences and better support learning clinical reasoning

and clinical judgment development in nursing education, learning, simulation, and

sociocultural theories were explored to inform this research. Simulation

is largely rooted in Kolb’s experiential learning theory (KELT) and the NLN Jeffries’s

simulation theory. Theoretical underpinnings of this study are also informed by Tanner’s

model of clinical judgment and Vygotsky’s sociocultural theory. When explored

collectively, key elements from each of these theories were interwoven to create a

framework, the Pre-brief Experiential Learning Scaffold for Clinical Judgment

Independence (PELS-CJI) that informs this dissertation study. Kolb’s experiential

learning theory, NLN Jeffries’s simulation theory, Tanner’s clinical judgment model, and

Vygotsky’s sociocultural theory are discussed. A new framework, the PELS-CJI is

presented.

Kolb’s Experiential Learning Theory (KELT)

Simulation is largely rooted in KELT. During the experiential learning cycle, the

role of the learner’s experience is emphasized in knowledge development which happens

during the learning cycle and continues through future experiences (Kolb, 1984). Kolb

identified learning as knowledge obtained through concrete experiences, involving a

cyclical process of four stages: the concrete experience, reflective observation, abstract

conceptualization, and active experimentation. Kolb’s experiential cycle provides a

framework for simulation delivery, where the simulation is the concrete experience, and

post simulation debriefing is the reflective observation. Abstract conceptualization can

occur in the debriefing process, where learners explore their simulation experience and

how they might assimilate their actions into different clinical scenarios (Dreifuerst,

2009). The learning cycle continues with active experimentation which occurs during

debriefing (after the student has participated in the simulation) or in future clinical events.

During simulation events, this cyclical learning pattern allows nursing students to engage

in psychomotor development (i.e., nursing skills) and supports cognitive development

(i.e., clinical reasoning) (Chmil et al., 2015).

NLN Jeffries’s Simulation Theory

A theoretical framework offering support more specific to simulation design and

facilitation is the NLN Jeffries (2015) simulation theory. The NLN Jeffries simulation

theory provides precise guidelines for experiential simulation activities. The theory

embraces critical background considerations, design factors, and outcome measures in

simulation development and execution. Background considerations, such as the

overarching goal(s) of the simulation, influence the simulation design which involves

setting specific learning objectives, fidelity, roles, scenario progression and pre/post

simulation activities. The background and design lead to the simulation experience,

which is experiential, interactive, collaborative, and learner centered.

While the NLN Jeffries’s simulation theory recognizes the background, design

and simulation experience as important components of a successful simulation, the

facilitator’s role and other educational strategies for simulation implementation are

primarily highlighted during and after the simulation takes place. A successful simulation

is heavily supported by the learner/facilitator relationship, and the facilitator is expected

to “respond to emerging participant needs during the simulation experience by adjusting

educational strategies… and providing appropriate feedback in the form of cues (during)

and debriefing [after] the simulation experience” (Jeffries, 2015, p. 292). When these

components (background, design, and simulation experience) are thoughtfully executed,

the simulation leads to successful outcomes.

Tanner’s Clinical Judgment Model

The clinical reasoning process that leads to clinical judgments involves four main

concepts, noticing, interpreting, responding, and reflecting, which are described by

Tanner’s (2006) model of clinical judgment in nursing (CJM), as shown in Figure 1. The

concept of noticing is the nurse’s basic assessment of the situation, while interpreting

involves a more in-depth understanding. Responding is completing an intervention or

deciding no intervention is required and reflecting entails evaluating the nursing action

and outcomes that followed.

To operationalize Tanner’s CJM, Lasater (2007b) developed the LCJR (see

Appendix A) that expands on the four components of clinical judgment (noticing,

interpreting, responding, and reflecting) with 11 total correlating descriptors (each with

associated developmental levels: beginning, developing, accomplished and exemplary).

The LCJR can be used to support student learning and to evaluate clinical judgment in

simulation and clinical practice. Research involving the use of the CJM and the LCJR is

explored in detail in Chapter 2.

Figure 1

Tanner’s Clinical Judgment Model

Note. This model was produced by Christine Tanner, depicting the process of clinical

judgment. From “Thinking Like a Nurse: A Research-Based Model of Clinical Judgment

in Nursing,” by C. A. Tanner, 2006, Journal of Nursing Education, 45(6), p. 207.

Vygotsky: Sociocultural Theory, Zone of Proximal Development, and Scaffolds

Simulation learning is also heavily influenced by the sociocultural theoretical

concepts presented by educational psychologist Vygotsky. Vygotsky (1978) focused on

the understanding of and strategies for encouraging developmental growth from simple to

complex processes, acknowledging that environmental signs and resources aid the learner

in problem solving. According to Vygotsky (1978), learning begins within the learner in

accordance with their prior experiences. Vygotsky recognized that learners may become

‘stuck’ in progressing with learning if current beliefs or understandings conflict with the

presenting situation. Clapper (2010) refers to such beliefs/understandings as “frames of

reference or ways of knowing” (p. 149).

The ZPD is one of Vygotsky’s most referenced areas of work in learning and is

considered the difference between where a learner is in their ability to accomplish

tasks/develop skills (on their own) and the potential learning that could occur (with

appropriate assistance). A facilitator or resource may be necessary to guide the learner

through the ZPD to achieve behavior change and learning development. Viewing

simulation learning through this theoretical lens, the simulation environment, patient

assessment data, medical devices and clinical scenario allow the student to manage

patient care through the experiential process, but learners’ current understandings or

knowledge may impede cognitive growth. The nursing student may require assistance to

achieve behavior or developmental change, such as enhanced clinical reasoning.

Instructional scaffolds to assist the nursing student in moving through the ZPD, then, may

support advancement in clinical judgment development.

Instructional Scaffolds

While not coined by Vygotsky himself, instructional scaffolding is highly

embedded within Vygotsky’s sociocultural theory. Instructional scaffolding is often

broadly considered as support with learning (Belland, 2014). Such support can be

accomplished through a variety of modes (including working with a more experienced

person or peer) or by using some sort of tool (computer or document) that would aid the

learner in behavioral modification, cognitive growth, or developmental change.

Ultimately, appropriately designed and delivered scaffolds support the learner in

advancing in their learning process to their full potential and are decreased as the student

moves through their ZPD.

Applying Vygotsky’s ZPD to nursing education simulation activities, scaffolds

can support nursing student growth and development in adopting advanced skillsets, such

as clinical reasoning and clinical judgment (Dickison et al., 2019). Cueing is a scaffold

in and of itself when done intra-simulation, but cues during simulation may interrupt

learning and prevent the learner from moving through the ZPD toward clinical reasoning

and clinical judgment development if they are not intentionally executed. However,

while cueing is a scaffold, other scaffolds can be developed to help learners recognize

cues (Burbach & Thompson, 2014; Thiele et al., 1986). Scaffolds in pre-briefing to help

the learner recognize intentional cues (cues that would be present in clinical practice)

during the simulation, for example, would support movement through the ZPD toward

clinical reasoning and exemplary clinical judgment development during simulation.

Conceptual Framework Informing this Study

Building off the work of Kolb, Jeffries, Tanner, and Vygotsky, the PELS-CJI (see

Figure 2) was created to inform simulation pre-briefing. The PELS-CJI considers the

background and design components of simulation to help the learner recognize (or notice)

appropriate cues to allow the student nurse to independently apply safe clinical judgments

during active experimentation (participating in the simulation). To help students think as

nurses and gain independence in building clinical judgment, a proposed pre-brief scaffold

is an I-VRS of an expert role model in a parallel simulation. The I-VRS involves

PlayPosit, an interface which offers interactive video presentation. The interactions

presented in the video prompt learner responses using components of the LCJR,

involving clinical judgment identification, comparison, and exploration. The engagement

with an I-VRS provides a concrete experience where the student can notice and interpret

clinical findings. It encourages written clinical judgment identification and provides

reflective observation with continued interpretation and responding. It also fosters

comparison and exploration with an expert role model and provides abstract

conceptualization with reflection. Active experimentation occurs through participation in

the subsequent simulation, where the learner begins the experiential learning cycle a

second time.

Traditionally, the experiential learning cycle is completed only once in simulation.

Using the PELS-CJI, the learner can complete the experiential learning cycle twice

during the entire simulation learning activity. The PELS-CJI, then, engages learners in

repetition of similar clinical practice. Such repetition may help develop learned concepts

that can be more readily accessed for future use (National Academies of Sciences,

Engineering, and Medicine, 2018). Defined by components of KELT, Jeffries’s NLN

simulation theory, Tanner’s CJM, and Vygotsky’s sociocultural theory, the PELS-CJI

framework informs simulation pre-brief design. The PELS-CJI allows for repetitive

cognitive processing, retrieval of information, synthesis, and application of new

information to support clinical reasoning and clinical judgment development. It was

hypothesized that the I-VRS would support the learner in their ZPD (pertaining to clinical

reasoning skills and clinical judgment) and lead to increased clinical judgment and

increased clinical judgment independence (in a subsequent parallel simulation, as

demonstrated by increased clinical judgment scores on the LCJR and less use of

unintended conceptual cues from the simulation operator).

Figure 2

Pre-brief Experiential Learning Scaffold for Clinical Judgment Independence (PELS-

CJI)

Significance

To ensure nursing graduates are prepared with clinical reasoning skills to make

appropriate, safe, and independent clinical judgments, more research addressing

strategies that will increase independence in clinical judgment is necessary. Scientifically

supported evidence to guide nurse educators on best scaffolding practices during pre-

17
brief to support the learner in the experiential component during sim is required. This

study will provide nurse faculty information regarding the effect of a pre-brief scaffold

on nursing student clinical judgment to influence simulation pre-brief practices.

Organization of this Research Dissertation

This dissertation is presented in five chapters. Chapter I included the introduction

and background, problem, purpose, research questions, significance, definition of terms,

and theoretical framework of this dissertation study. Chapter II of this research

dissertation will expand on the theoretical foundations of KELT, NLN Jeffries’s

simulation theory, Tanner’s clinical judgment model, and Vygotsky’s sociocultural

theory and how they influence simulation. Learning, teaching, and assessing clinical

judgment in nursing education is explored. A synthesis of literature related to the

practices of scaffolding in simulation in undergraduate nursing education and the effect

of simulation on clinical judgment is presented. Chapter III includes the methodology,

inclusion and exclusion criteria, sample selection, instrumentation, data collection and

data analyses used. Chapter IV presents the findings including demographic and data

analyses for the research questions, and Chapter V provides a summary of the study,

discussion of the findings, and implications for simulation pre-briefing in nursing

education.

Chapter II: Literature Review on Learning, Teaching, and Assessing Clinical

Judgment

This review provides an exploration of pertinent literature related to learning,

teaching, and assessing clinical judgment in nursing education from multiple theoretical

lenses. Additionally, a review of the literature related to the practices of scaffold use in

education and simulation in undergraduate nursing education is explored. A synthesis of

literature related to the practices of scaffolding in simulation in undergraduate nursing

education and the effect on clinical judgment is presented.

This literature review was developed through multiple searches for key terms

using the CINAHL, ERIC and Education Source databases, with no limitations on

publication dates. Key terms included Simulat* AND (BSN OR Baccalaureate OR

Bachelor’s) AND (Clinical Judgment) AND (Evaluate OR Assess*). Additional sources

were obtained with the searches involving Simulat* AND (BSN OR Baccalaureate OR

Bachelor’s) AND Video; Simulate* AND (Lasater Clinical Judgment Rubric);

Scaffolding AND (BSN OR Baccalaureate OR Bachelor’s); Scaffolding AND Nursing;

Scaffolding AND Simulat*; Scaffolding AND “Zone of Proximal Development.”

Further sources were obtained to include information on the history of learning

philosophies across disciplines both in and outside of nursing. A search of the

CINAHL, ERIC, and Education Source databases was conducted, with no restrictions on

dates of publications, using key words such as Learning Theory AND (Education),

Knowing AND (Learning OR Assess*). All literature sources extracted for this review

were published in peer-reviewed journals or books.

Learning Clinical Judgment from Constructivists and Sociocultural Perspectives

Traditional nursing education follows a didactic empiricist approach where the

professor stands in front of a large group of students in a lecture hall setting and imparts

their knowledge to them. These traditional learning experiences originated with

cognitive theoretical foundations in mind, providing opportunities for learners to develop,

store, alter, and use the information gained (Danish & Gresalfi, 2018). Over time, nurse

educators have attempted to shift instructional approaches to better support complex

cognitive nursing skill development (i.e., clinical judgment), but current nursing

education andragogy has been slow to adopt constructivist and sociocultural

perspectives.

A constructivist perspective acknowledges that learning is constructed through

processes where the learner works to make sense of their experiences (Fosnot & Perry, 2005). It is an

iterative process where exploration, error making, reflection, and community discussions

drive the development of learned concepts. Through a constructivist theoretical lens,

nursing students learn clinical judgment through clinical practice and reflection. Since

clinical judgment is the result of clinical reasoning where the nurse assesses, interprets,

and organizes external data with internal knowledge, simulation with debriefing is an

experiential constructive learning approach that provides an authentic learning

environment to support preparation for clinical practice (Huston et al., 2018; Tanner,

2006).

When used in a formative manner, simulation focuses heavily on the process of

learning where knowledge is gained and changed by experience, which is a focal element

of experiential learning (Kolb, 1984). Kolb’s (1984) experiential learning theory is a

constructivist model that informs the use of simulation in nursing education. The KELT

examines the learning process in a cyclical fashion that begins with a concrete experience

and moves through reflective observation (reflecting on the experience), abstract

conceptualization (learning from the experience) and active experimentation (learning

application). A simulation in nursing education provides the concrete experience

allowing practice, and post simulation debriefing provides time for reflection. Abstract

conceptualization occurs in the debriefing process where learners are supported in

exploring their simulation experience (Dreifuerst, 2009). Finally, the learning cycle

continues with active experimentation during debriefing, where the learners may be

challenged by the facilitator to apply what they have learned to different situations, or

after the simulation, where learners can apply and assimilate what they have learned in

future clinical situations. To develop clinical judgment, nursing educators provide

simulation opportunities to practice, reflect, transfer, and apply concepts of nursing

(Tanner, 2006).

Through constructivist simulation activities with debriefing, nursing students are

immersed in what Tanner (2006, p. 206) refers to as the “social embeddedness of nursing

knowledge,” which is obtained through observation of, and conversations with, other

nurses. Simulation and debriefing allow for the practice and learning of technical and

cognitive nursing processes with and from other healthcare providers in an authentic

environment, providing students with increased assimilation into both

physical and social contexts. Learners move through the experiential learning cycle

while becoming socially engaged in learning nursing knowledge, practicing clinical

reasoning, and exercising clinical judgments.

Bang and Medin’s (2010) reflections align with the sociocultural implications of

learning clinical judgment. As Bang and Medin (2010) assert, culture has a large impact

on learning and development. From the sociocultural perspective learning involves

understanding the “context in which the individual is interacting” in the environment

(Danish & Gresalfi, 2018, p. 36). When there is limited environmental immersion,

presenting sociocultural traditions may be misunderstood, or one’s decision making could

be influenced by their prior ways of knowing (National Academies of Sciences,

Engineering, and Medicine, 2018). In other words, the way a learner responds in any

given situation is partially dependent on the traditions of their social environments, and

these traditions may impact their personal cognitive processes (Cress & Kimmerle,

2018; National Academies of Sciences, Engineering and Medicine, 2018).

Applied to nursing education, it would be important to consider how a student’s

social environment and prior ways of knowing impact their clinical reasoning and clinical

judgment. For example, in the clinical setting, nursing students may encounter new

situations that challenge their current beliefs or understandings, leading to a conflict

which affects their comprehension of the current situation (Clapper, 2015). This could

cause inaccurate interpretations and affect their clinical reasoning. The skewed reasoning

may be transferred onto their clinical judgments, which could have negative

consequences for the patient. How to teach students to critically think to inform clinical

reasoning and clinical judgments is explored.

Teaching Clinical Reasoning and Clinical Judgment

Teaching nursing students clinical reasoning processes to make informed clinical

judgments is essential (Dickison et al., 2016). A notable recurring theme for teaching

clinical judgment development involves time to engage in experiences that involve

clinical reasoning components (Cioffi, 2000; Tanner, 2006; Victor et al., 2017; Victor,

2017). Tanner (2006) attributes the novice nurse’s immature thinking processes to their

beginning clinical judgment skills. Having more time in clinical practice, expert nurses

gather more information than novice nurses and recognize and collect more relevant cues

(Burbach & Thompson, 2014; Cioffi, 2000; Hoffman et al., 2009; Levett-Jones et al.,

2010). Novice nurses lack the ability to recognize and collect relevant cues, and they

attempt to identify a problem without acknowledging appropriate cues (Levett-Jones et

al., 2010). The novice nurse may think more methodically as they recall theoretical

knowledge that is readily available to the expert nurse. Readily available knowledge

allows the expert nurse to seamlessly integrate the theoretical knowledge with the

experiential knowledge gained in practice (Tanner, 2006). Such enhanced relevant cue

recognition allows the expert nurse to connect cues and predict what may happen to a

patient, allowing for earlier clinical reasoning application and therefore prevention of

worsening patient conditions.

Benner discusses the learning continuum of nurses from novice to expert,

asserting that clinical nursing expertise (involving exemplar critical thinking, clinical

reasoning, and clinical judgment) comes not just with time but with “…the refinement of

preconceived notions and theory by encountering many actual practical situations…”

(Benner, 1982, p. 407). While this notion of time to develop clinical judgment is often

considered over a long-term period (i.e., the duration of one’s career), it can also be

viewed from an acute period (i.e., during a course) (Hamers & Csapo, 1999). Engaging

learners in repeated similar situations increases cognitive frameworks and connections

that develop more effective thinking to lead to expertise (National Academies of

Sciences, Engineering, and Medicine, 2018). Victor et al. (2017) noted a pattern of

growth in clinical judgment following repeated clinical exposure in simulation over the

course of a semester, although the growth was not statistically significant. In another

study that measured clinical judgment over the course of the entire nursing program,

statistically significant growth in clinical judgment was observed (Victor, 2017). The

LCJR was used in both studies to evaluate clinical judgment. Victor (2017) reported

strong internal consistency of the LCJR (Cronbach’s alpha .92) and strong agreement

between faculty evaluators (kappa scores ranging from 0.73 to 0.89). Demonstrating

reliability, the LCJR has been used to evaluate clinical judgment, and repeated learning

experiences in nursing programs have shown to support clinical judgment growth

(Postma & White, 2015). In addition to conducting repeated learning experiences, there

are specific active learning (AL) strategies that have supported clinical judgment

development during the students’ time in their nursing programs.

Active Learning to Support Clinical Judgment Development

To expedite facilitation of nursing clinical judgment development during nursing

programs, nurse educators use AL strategies developed from constructivist approaches

(Docherty et al., 2018; Shin et al., 2015b; Tedesco-Schneck, 2013). In AL experiences,

the focus of instruction shifts from lecture-based strategies to cognitively engaging

activities. Such activities involve knowledge discovery, integration, transfer, and

application. They can lead to increased levels of understanding which can influence

future experiences (Chi & Wylie, 2014). Active learning activities are necessary for

clinical reasoning and clinical judgment development which requires thinking,

processing, and analyzing of situations (Candela et al., 2006; Drake, 2012; Stanley &

Dougherty, 2010).

Many different AL experiences are explored in nursing education to promote

clinical judgment development (Ayed et al., 2022; Costello, 2017; Fogg et al., 2020;

Jensen, 2013; Kinyon et al., 2021; Klenke-Borgmann, 2020; Lavoie et al., 2019;

Strickland et al., 2017; Victor, 2017). Of the research on AL experiences and its effect

on clinical judgment, much outcome data is based on perceptions (Fogg et al., 2020;

Jensen, 2013; Kinyon et al., 2021; Strickland et al., 2017). Though AL activities such as

concept maps and case studies have been found effective in supporting clinical judgment

development, these activities lack the appeal of technology of simulation (Blum &

Parcells, 2012). Simulation has undergone perhaps the most rigorous evaluation and is a

commonly used and studied AL activity in nursing education (Blum & Parcells,

2012; Hayden et al., 2014). Active learning with simulation involves high-fidelity

manikins or standardized patients in the classroom, clinical, lab or virtual environments

(Andrea & Kotowski, 2017; Ayed et al., 2022; Bambini et al., 2009; Cazzell & Anderson,

2016; Fogg et al., 2020; Jensen, 2013; Lavoie et al., 2019; Strickland et al., 2017; Victor,

2017). Klenke-Borgmann, Cantrell, and Mariani (2020) conducted an integrative review

of the literature specific to simulation and its effect on clinical judgment. The integrative

review resulted in 24 total research articles including mixed methods (n = 1), quantitative

(n = 14) and qualitative (n = 9) studies. Many of the studies in their review provided data

about developing clinical judgment in specific areas of nursing (i.e., geriatrics, pediatrics,

cardiac, etc.) (Brown & Chronister, 2009; Johnson et al., 2012; Shin & Kim, 2014; Shin

et al., 2015a; Powell-Laney et al., 2012). Other studies showed that increased clinical

judgment occurs with engagement in multiple simulation experiences (Bussard, 2018;

Shin et al., 2015b; Yuan et al., 2014). Additionally, several studies involving the effects

of simulation debriefing on clinical judgment were recognized (Ashley & Stamp, 2014;

Dreifuerst, 2012; Kuiper et al., 2008; Lasater, 2007a; Lavoie et al., 2013; Mariani et al.,

2013). Klenke-Borgmann et al. (2020) also found that clinical judgment was measured

using a variety of tools and methods, including the LCJR (n = 8), multiple-choice

exams (n = 5), checklists (n = 1), and thematic analysis of student interviews, journals, and focus

groups. Klenke-Borgmann et al. (2020) identified using simulation in the classroom and

interprofessional simulation experiences as gaps in the literature relating to simulation

and clinical judgment development. Notably there were no studies discussed in their

review that examined pre-briefing and the effect on clinical judgment development.

Since the integrative review completed by Klenke-Borgmann et al. (2020),

additional literature specific to simulation and its effect on clinical judgment has been

published. Simulation video observation and virtual simulation engagement seem to be

emerging trends in the literature (Kelly et al., 2022; Klenke-Borgmann et al., 2021;

Kool, 2022; Pardue et al., 2023; Rogers & Franklin, 2022). A notable study from

Klenke-Borgmann et al. (2021) involved students observing simulation videos in the

classroom while completing a worksheet with questions pertaining to the four

components of clinical judgment. Analyzed with repeated measures analysis of variance,

the observation activity led to a statistically significant increase in the noticing (F = 6.229),

interpreting (F = 8.147), and responding (F = 19.943) components of nursing students’

clinical judgment when evaluated in simulation using the LCJR.

While physical engagement is often referenced as AL, there is additional support

that learners can also be actively engaged in simulation as observers (Bates et al., 2019;

Berndt et al., 2015; Hober & Bonnel, 2014; Howard, 2021; Johnson, 2019; MacLean et

al., 2019; Norman, 2018; Reime et al., 2017; Rode et al., 2016). Observers can be

observing a live simulation or be observing a video simulation that was previously

recorded. Video recorded simulations (VRS) are often used in undergraduate nursing

education in debriefing (post-simulation) activities, but an unanticipated use of VRS

emerged in the spring of 2020 during the COVID-19 pandemic (Palancia Esposito &

Sullivan, 2020). At times during the pandemic, nursing students were not allowed to go

into various healthcare settings. Pre-recorded simulations were often observed online as

a substitute for in-person clinical experiences. While observing VRS became the quick

fix to support continued clinical learning in nursing education during the pandemic, prior

literature involving outcomes of VRS was often based on student perceptions of their

learning or satisfaction with the activity, and did not involve assessment of clinical

reasoning or clinical judgment (Ferguson & Estis, 2018; Herron et al., 2019; Powers,

2020; Williams et al., 2009). Outcome assessment pertaining to clinical reasoning and

clinical judgment with VRS remains limited, although clinical reasoning and clinical

judgment are frequently assessed with in-person simulation.

Assessing Clinical Reasoning and Clinical Judgment in Simulation

Because simulation is the most frequently reported AL experience in nursing education research, assessment of clinical judgment in simulation is explored here. Much of the

outcome data pertaining to clinical judgment development in simulation is based on

student perceptions and self-assessments (Andrea & Kotowski, 2017; Bambini, 2009;

Fogg et al., 2020; Lavoie, 2019; Roy, 2016). The most common instrument used by

students to self-assess clinical judgment is the LCJR (Andrea & Kotowski, 2017; Fogg et

al., 2020; Lavoie, 2019). Faculty assessment of clinical judgment is reported at different

phases of simulation and with many different instruments. Instruments used by faculty to

assess clinical judgment during simulation include the LCJR (Bussard, 2018; Coram,

2016; Johnson et al., 2012; Mariani et al., 2013; Reid et al., 2020; Shin & Kim, 2014), the

Creighton Competency Evaluation Instrument (CCEI) (Hayden et al., 2014; Kidd, 2017;

Page-Cutrara & Turk, 2017), pre-test post-test exams (Powell-Laney et al., 2012), and the

Korean Nurses' Core Competency Scale (Shin et al., 2015), which is a Korean-language version of the LCJR. Assessment of clinical judgment following a

simulation and debriefing was obtained with pretest-posttest methods (Dreifuerst, 2012),

qualitative student responses (Ashley & Stamp, 2014; Lasater, 2007a), and worksheet

evaluation (Kuiper et al., 2008).

With variable instruments and methods used to assess clinical judgment in

simulation, it is essential that assessment methods are evidence based (Polit & Tatano

Beck, 2008). Pre-posttest methods may examine cognitive knowledge, though they may not be a practical way to measure actual clinical judgment, as extensive testing to determine the efficacy, reliability, and validity of faculty-developed methods would be necessary. Developing test questions that measure higher-order constructs (like clinical judgment) is an extensive process involving item writing, review, and evaluation by multiple clinical subject matter and statistical analysis experts (Betts et al., 2019). Further,

while self-report has been an easy way to measure student perceptions of their actions, it

is imperative that we begin to measure clinical judgment with instruments that have

strong support theoretically and psychometrically, with instruments that have undergone

rigorous evaluation (Polit & Tatano Beck, 2008). The CCEI and the LCJR are commonly

used instruments to evaluate clinical judgment which have undergone rigorous

evaluation.

Creighton Competency Evaluation Instrument

One tool used to measure clinical performance in simulation that has

demonstrated validity and reliability is the Creighton Simulation Evaluation Instrument

(C-SEI) (Todd et al., 2008). The C-SEI was developed to provide means for quantitative

measurement of clinical performance, including assessment, communication, critical

thinking and technical skills. A total of 23 items are scored, with a score of one for

competent and a zero for not competent. Hayden et al. (2014) modified the C-SEI for use

in the National Council of State Boards of Nursing (NCSBN) study, which supported that simulation could replace up to 50% of clinical experiences without impacting student

learning. In the modified instrument there were minor wording alterations, and the

critical thinking and technical skills categories were “changed to clinical judgment and

patient safety” to better align with current nursing practice standards (Hayden et al.,

2014, p. 246). The revised instrument is the CCEI and contains 23 items that are

organized into four categories, with nine items specifically correlating to clinical

judgment.

While the NCSBN study is well known within nursing simulation literature, other

researchers have utilized the CCEI to assess clinical competence in simulation as well

(Beman, 2017; Brennan, 2022; Kidd, 2017; Page-Cutrara, 2015; Raman et al., 2019).

The instrument has demonstrated psychometric validity and reliability (Hayden et al.,

2014; Manz et al., 2022). Hayden et al. (2014) obtained content validity from 35 expert

faculty who strongly agreed on the content of the tool (M = 3.86, SD = 0.22) and how

easily understandable it was (M = 3.78, SD = 0.27). Reliability was verified by

comparing faculty evaluations (using the CCEI) of three simulation videos that used the

same scenario with varying levels of clinical judgment proficiency. Rater agreement was

determined at 79.4 percent overall, with Cronbach’s alpha above .90. Manz et al. (2022)

determined content validity of the CCEI in the clinical practice environment by obtaining

survey responses (one [strongly disagree] to four [strongly agree]) from 31 clinicians

regarding the usability of the instrument (M = 3.41, SD = 0.50), its comprehensiveness (M = 3.38, SD = 0.49), and its ease of use (M = 3.35, SD = 0.55). Demonstrating validity and

reliability in both simulation and clinical practice, the CCEI evaluates multiple

components of clinical competence consistent with the Essentials of Baccalaureate

Education for Professional Nursing Practice (AACN, 1998).

Tanner’s Clinical Judgment Model and the Lasater Clinical Judgment Rubric

Another instrument used for assessment related to clinical competence is the

LCJR, which aligns with Tanner’s CJM. Tanner’s (2006) CJM serves two purposes: 1)

to describe the clinical judgment process as it occurs in practice and 2) to support nursing

faculty and students in identifying areas for clinical judgment growth. According to the

CJM, clinical judgment is influenced heavily by a nurse’s prior experiences (inside and

outside of the clinical setting). While an experienced nurse's ability to engage in the clinical judgment process is more fluid than that of a novice nurse (who lacks clinical experience

to influence their interpretations and responses to a clinical situation), the CJM is

applicable to nurses at all levels of experience. The CJM shows the reasoning pathways a

nurse uses in complex clinical situations, including noticing, interpreting, responding, and

reflecting. How the clinician notices, interprets, responds, and reflects on a clinical

situation is affected by their background, environmental contextual factors (including

norms followed in the workplace), and the nurse/patient relationship. When noticing,

interpreting, and responding, the nurse is thinking in action during the clinical situation,

and they are thinking on action during the reflection phase. While going through clinical

reasoning processes (i.e., noticing, interpreting, responding) “the nurse must be cognizant

of the patient’s need through data or evidence, prioritize and make sense of the data…

and come to some conclusion about the course of action” (Lasater, 2007b, p. 497). These

conclusions are clinical judgments.

Lasater (2007b) created a tool to assess clinical judgment in simulation. While

Lasater’s (2007a) earlier research used qualitative student data to demonstrate the

effectiveness of simulation to enhance clinical judgment, they ultimately designed a tool, the LCJR, to provide a quantitative measure of clinical judgment and a means to

operationalize Tanner’s CJM (Lasater, 2007b).

The LCJR provides the nursing student a clear outline of simulation expectations

(pertaining to clinical judgment) and is a tool for nursing educators that supports

productive learning conversations and evaluations about the simulation experience

(Lasater, 2007b). The LCJR focuses and expands on the four components of

clinical judgment (noticing, interpreting, responding, and reflecting) with 11 total

correlating descriptors. Each descriptor has associated developmental levels: beginning,

developing, accomplished and exemplary. Total clinical judgment scores range from 11-

44, and points are awarded to each descriptor as follows: one point for beginning clinical

judgment, two points for developing clinical judgment, three points for accomplished

clinical judgment and four points for exemplary clinical judgment.

Psychometric validity and reliability of the LCJR has been established (Adamson

et al., 2011; Adamson & Kardong-Edgren, 2012; Victor-Chmil & Larew, 2013; Lasater, 2007b;

Shin & Kim, 2014; Strickland et al., 2017). Adamson and Kardong-Edgren (2012) reviewed the reliability and validity data from three different sources: interrater reliability was confirmed by Adamson et al. (2011) with an intraclass correlation coefficient of 0.889 and

96% agreement between raters when comparing mean pre-posttest scores; Sideras (2007)

demonstrated construct validity (regardless of the level of student experience) with a

large effect size between student groups; and Gubrud-Howe (2008) also demonstrated

interrater reliability (r = 0.92 to 0.96). These studies reviewed by Adamson and Kardong-Edgren (2012)

support the use of the LCJR when measuring clinical judgment in high-fidelity simulation

environments with variable evaluators, simulation cases, and levels of learners. Additionally, Victor-Chmil and Larew (2013) provided an extensive review of

evidence pertaining to the validity and reliability of the LCJR. They identified published

peer-reviewed journal articles and supportive gray literature and confirmed well-established content validity. Victor-Chmil and Larew recognized, though, that the

established reliability data is supported only when used with the undergraduate nursing

student population. They proposed additional research to establish reliability with

graduate students and practicing nurses. As a valid and reliable tool to assess clinical

judgment of undergraduate nursing students in simulation, the LCJR has also been

adapted for use in international research (Kim et al., 2016; Perbone Nunes et al., 2016;

Román-Cereto et al., 2018; Shin et al., 2015a; Vreugdenhil & Spek, 2018).

The LCJR has been used to study clinical judgment in a variety of ways, one

being to explore the correlation of a student’s self-assessment of clinical judgment when

compared to nursing faculty’s assessment (Jensen, 2013; Strickland et al., 2017). Jensen

(2013) found that students and faculty score similarly in clinical judgment, but students

often rated themselves higher than faculty, with just three (out of 11) significant

relationships in ratings. Strickland et al. (2017) also noted similar clinical judgment

assessments between nursing students and nursing faculty, with a small positive

correlation (n = 94 [students] and n = 1 [faculty], p = 0.03, r = .314, Cronbach’s alpha =

0.82). Though statistically significant, the variance accounted for is low, indicating a

weak correlation (Munro, 2005). While both Jensen and Strickland report an agreement

between student and faculty scores, they both note that the students did score themselves

slightly higher than faculty, on average, and discrepancies were thought to be related to

the students’ lack of clinical judgment understanding and evaluation experience. The

minor differences in scores from student and instructor did not interfere with the use of

the LCJR as a tool to stimulate a shared dialogue about the simulated clinical

experiences, but student self-assessment alone, despite use of a validated instrument, may

not provide an accurate assessment of their clinical judgment (Jensen, 2013). Student

support in understanding clinical judgment and the use of the rubric is warranted

(Strickland et al., 2017).

The LCJR has been used to assess clinical judgment in several other research

studies. Bussard's (2018) research using the LCJR determined a competency score of clinical judgment based on specific end-of-program outcomes, and they stressed the need to determine such scores if the rubric is used for high-stakes purposes. Adamson (2016)

evaluated the effect of race/ethnicity bias on rater scoring of clinical judgment with the

LCJR, finding no significant impact (p = .753), further supporting its validity. Victor et

al. (2017) used both the C-SEI (performance) and the LCJR (judgment) to examine the

relationship between clinical judgment and performance in both the simulation and

hospital setting. Significantly positive relationships between clinical judgment and

performance (r = 0.79, p <.001) and between simulation and clinical performance (r =

0.87, p <.001) were noted. Similarly, Reid et al. (2020) used the LCJR to evaluate

nursing student clinical judgment in both simulation and hospital settings, finding no

statistically significant difference (p = 0.295) between students who experienced clinical

rotations involving only simulated experiences and students who experienced clinical

rotations in the hospital setting. While clinical judgment is more commonly assessed in

simulation and clinical environments, the LCJR has also been used to establish a clinical

judgment assessment in the virtual simulation environment and with journaling responses

(Bussard, 2015; Fogg et al., 2020).

Many studies that have used the LCJR have assessed clinical judgment progression as students engage in simulation and clinical experiences over time (Blum et al., 2010;

Bussard, 2018; Chmil et al., 2015; Fawaz & Hamdan-Mansour, 2016; Leijser & Spek,

2021; Shinnick & Cabrera-Mino, 2021; Victor et al., 2017). Shinnick & Cabrera-Mino

(2021) explain that a predictor of clinical judgment is years of experience, suggesting that

“educators should not expect large improvements in a student’s clinical judgment skills

until the student has further clinical experience as a nurse” (p. 109). Notably, these

conclusions contradict the goal of educators who strive to prepare graduating nursing

students with advanced critical thinking, clinical reasoning and clinical judgment skills to

provide safe care (Ashcraft et al., 2013). Despite the notion that clinical judgment takes

years to develop, there is some evidence of growth in short periods of time if learners are

provided experiences to foster clinical judgment development (Victor et al., 2017; Victor,

2017).

LCJR versus CCEI

Both heavily used, the LCJR and the CCEI are valid and reliable instruments for

clinical judgment assessment in undergraduate nursing students. These instruments have

been used to evaluate outcome competencies, examine relationships between clinical

judgment and clinical performance, and assess clinical judgment growth. The CCEI is a

flexible instrument for clinical competence assessment, including clinical judgment.

Although the CCEI measures several components of clinical competence, only nine of its items relate specifically to clinical judgment. The CCEI provides a measure of overall clinical judgment, though it does not distinguish among varying degrees of clinical judgment competence. The LCJR not only measures clinical judgment explicitly but also differentiates among degrees of clinical judgment competence ranging from beginning to exemplary.

Despite the availability of instruments such as the LCJR to assess clinical

judgment following simulation, the research scarcely focuses on evaluating the effect of

specific components of simulation to enhance clinical judgment of nursing students. In

other words, clinical judgment scores are assessed post simulation, but the entire

simulation itself was the intervention. Research assessing the impact of specific

components of simulation is needed, especially the understudied pre-brief period.

Scaffolds

Components of simulation that support clinical judgment development can be

referred to as scaffolds. The concept of scaffolding is well known in learning sciences

and is applicable to the ZPD (the space where the learner could accomplish tasks/develop

skills if supported appropriately) (Belland, 2014). Some scholars argue scaffolding has

been misinterpreted in reference to Vygotsky’s theory (Smagorinsky, 2018). Vygotsky

(1978) defined the ZPD as “the distance between the actual developmental level as

determined by independent problem solving and the level of potential development as

determined through problem solving…” (pp. 86-87). Educators have assumed literally

that the learner will indeed accomplish short-term gains (as opposed to the long-term

gains Vygotsky’s theory proposes) if they are provided adequate support (Smagorinsky,

2018). But Vygotskian theory and the ZPD supported learning and development that would stay with the learner and continue to grow when applied over time (Smagorinsky,

2018). Essentially, the goal of the scaffold with the novice learner is to encourage

independence within a certain social environment, and the scaffold serves as a mediator

(Boblett, 2012). Applying the learning sciences concept of scaffolds to nursing

education, a scaffold may be implemented to move the learner through the ZPD and

mediate clinical reasoning and clinical judgment development.

While Vygotskian theory is supported by sociocultural foundations, the ZPD emphasizes individual assessment of learners to allow the instructor to tailor educational activities within shared and personal social contexts. When designing scaffolds, it is necessary to consider the learner's current development to avoid over-simplifying or

over-complicating a concept (Horton, 2008; Nguyen, 2017). Determining this sensitive

area is key to facilitating cognitive growth, competence, and independence so the scaffold

can eventually be removed. With the goal of independence, scaffolds have been shown to

improve a number of specific skills across many disciplines, including cultural

competence (Lujan & Vasquez, 2010), writing skills (Motlhaka, 2020), providing

feedback (Barnard et al., 2015), problem solving skills (Tchounikine, 2019), language

skills (Takahashi, 1998), musical development (de Vries, 2005), and teaching (Bliss et

al., 1996; McCullagh, 2012).

While scaffolds are most notably referenced in regard to child learning and

development, Vygotsky’s methods are also used in baccalaureate programs like

education, engineering, dentistry, accounting, pharmacology, physiotherapy, and science

(Brandenburg, 2021; Devereux & Wilson, 2008; Hardman & Ng’ambi, 2003; Korpi et

al., 2019; MacLeod & van der Veen, 2020; Meijerman et al., 2016; Neville, 2018;

Nichols & Nichols, 2006; Roberts, 2018; Rotsaert et al., 2018). Hardman & Ng’ambi

(2003) report usage of a computer-based scaffold they designed for learners in a Bachelor

of Education program, and findings from a study by Rotsaert et al. (2018) support

anonymous peer feedback in another Bachelor of Education Studies program. Additional

scaffolds used in Bachelor of Education environments include reflection (Brandenburg,

2021; Roberts, 2018) and engaging reading assignments (Devereux & Wilson, 2008).

Nichols & Nichols (2006) report scaffolding strategies to empower American Indian

students to be successful in baccalaureate programs in biology, family, agricultural, and

consumer sciences. Scaffolds have also been used to improve motivation and promote

reflective practice (to support professional growth) for dental students (Meijerman et al.,

2016; Neville, 2018). Self-reflection and peer group reflections (with instructor support)

were successful scaffolds in physiotherapy education (Korpi et al., 2019), and scaffolds

(specific coursework strategies and faculty oversight) were documented in an

interdisciplinary team project for students from mathematics and engineering (MacLeod

& van der Veen, 2020).

Used in a variety of disciplines, scaffolds must be carefully integrated in

education design to enhance individual learner knowledge and skillsets (Coombs,

2018). The effects of scaffolds vary, but a consistent theme is that intentional scaffolds

can promote successful outcomes where the learner progresses towards independent

conceptual application (Coombs, 2018). Eventually, the learner will be in a place where

the scaffold is no longer needed (Boblett, 2012). With the ability to support varying

learner educational levels and outcome goals, scaffolding practice is also present in

nursing programs.

Scaffolds in Nursing Education

An examination of the literature related to scaffolds in nursing education shows

an emergence of scaffold use in didactic nursing education environments to assist

learners with both affective and cognitive functions. For example, art was used

as a scaffold to build self-awareness, providing learners the opportunity to examine their

identity and what nursing meant to them (Hydo et al., 2007). Students were provided

with question prompts and participated in small group discussions to promote reflective

thinking and then independently created an artistic representation of what a nurse is in

their view. Through qualitative inquiry, Hydo et al. deduced that the artistic creations

helped students gain self-awareness. The art scaffold supported heightened awareness that

the students may not have been able to achieve on their own. Another example of a

specific scaffold in a didactic nursing setting is the use of a social media platform

(Mistry, 2011). Connecting faculty and students on Twitter (now known as ‘X’) allowed

for supportive interactions that enhanced learning and reflection (Mistry, 2011).

Additionally, scaffolds were used to aid in the development of nursing students’ abilities

to evaluate credible research and enhance technical writing skills (Sakraida,

2020). Sakraida (2020) used three different scaffolds (exemplar articles, exemplar

appraisals, and guided written reflections) over the course of a semester to help learners achieve

independence in gathering and evaluating credible research.

Further research on scaffold approaches used in the nursing classroom setting

appears sparse. Arguably, scaffold practices toward development of nursing competence

are present in nursing classrooms but are not specifically referred to as scaffolds.

Examples of interventions in nursing classrooms that are not explicitly called scaffolds

include mindfulness sessions (to reduce stress and anxiety), interactive puzzles (to

promote safe medication administration), group debates and video recorded simulations

(to enhance critical thinking skills) (Hadenfeldt et al., 2021; Leslie et al., 2020; Nurakhir

et al., 2020; Sharpnack et al., 2013). Whether through stimulating engagement, creating

awareness, reinforcing knowledge, or providing demonstrations, these activities were all

conducted to support independent development of critical nursing skills. They were

completed with the aid of other persons or tools to guide learners toward achievement, thus

fitting the scaffold paradigm. Another space in nursing education where scaffolds are not

explicitly defined as such is simulation. One of the leading scaffolds used in simulation

is debriefing, which is done after the simulated clinical encounter.

Post-Simulation Scaffold: Debriefing

Following a simulation experience, post simulation debriefing aligns with the

conceptual definition of an educational scaffold. Debriefing is led by a trained facilitator

to promote a reflective collaboration regarding the simulation experience (Lioce, 2020).

The debrief scaffold of simulation has been highly studied and has demonstrated many

benefits that support essential nursing skill development. While the simulation debrief is

not the focus of this dissertation, a few examples are shared to demonstrate how

debriefing is a scaffold. For example, debriefing allows for an emotional release to

support strengthened reflection and encourage reflective thinking (Dreifuerst, 2009;

Tutticci et al., 2018). The reflective thinking in debriefing promotes reflective practice,

which aids learners in assessing their own learning needs, an essential component of

nursing practice (Benner et al., 2010). Additionally, a structured debrief after simulation

provides collaborative dialogue to support content understanding and promote

skill/knowledge transfer and can help learners to develop the critical thinking and clinical reasoning skills necessary for clinical judgment (Decker et al., 2021; Dreifuerst, 2009;

Hines & Wood, 2016). A seasoned debriefer can help students take what was done in

simulation, apply and assimilate it to different situations, and build schema of nursing

practices (INACSL Standards Committee, 2021). Consistent with a Vygotskian-rooted meaning of a scaffold, debriefing in nursing simulation provides the novice learner with

an expert guide to help them reflect and develop essential nursing practices including

critical thinking, clinical reasoning, and clinical judgment.

Intra-Simulation Scaffolds

In addition to debriefing, there are some examples of scaffolds used during the

simulated encounter, or intra-simulation (Andrea & Kotowski, 2017; Holland, 2020;

Najjar et al., 2015). Using a grounded theory approach, Najjar et al. (2015) explored

nursing student’s experiences of intra-simulation scaffolds. Nursing students perceived

their learning was enhanced while observing their peers in simulation, and that watching

others during a simulation helped increase knowledge and skill development (Najjar et

al., 2015). Students also perceived that working with a peer intra-simulation provided supportive collaboration, which expedited necessary nursing interventions.

Standardized patients (SPs), actors portraying a patient in the simulation, are also intra-simulation scaffolds. The use of SPs promoted empathetic care in novice nursing

students by fostering authenticity and creating more intimate interactions (Holland,

2020). Standardized patients who offered positive feedback increased nursing students'

self-assessed clinical judgment (Andrea & Kotowski, 2017). Sometimes the feedback

provided by an SP is a particular kind of intra-simulation scaffold referred to as

cueing. Cueing in simulation, though, is a complex concept that has multiple purposes and can come from sources other than the SP; it is explored further below.

Cueing. Cueing, as a scaffold in simulation, provides instructional support that

helps the learner achieve the simulation objectives (conceptual cues) or helps the learner

interpret the fidelity of the simulation (reality cues) (Paige, 2016). Importantly, cues used

in simulation are intended to be supportive, and they must not “interfere with [the

learner’s] independent thought” (Jeffries & Rogers, 2007, p. 29). A cue can emerge from

the equipment, environment or from character (i.e., SP, patient, embedded participant)

responses (Jeffries, 2005).

Available literature on cueing in nursing simulation is scarce. Poledna, Gomez-

Morales and Hagler (2022) completed a scoping review of literature on how nursing

students recognize cues in simulation and found that most studies (n = 16/17) examined

the relationship between cue recognition and patient deterioration. Missed cue

recognition leading to a decline in patient status was the most prominent theme noted.

Additional literature available on cueing in simulation more commonly focuses on defining

what cueing is in simulation education (Jeffries, 2005; Alessi, 2000; Adames et al., 2008;

Dieckmann et al., 2010). There are some studies that discuss unintended negative

outcomes of cueing. Though intended to be supportive, cues in simulation can be

distracting to learning and it is important that both reality and conceptual cues are

planned, piloted and intentional so as not to negatively influence learning (Biggs et al.,

2021; Jeffries, 2015; Paige & Morin, 2013; Paige & Morin, 2016). Escher et al. and

Adams et al. (in Paige & Morin, 2013) found that cueing during simulation can

negatively impact student team communication and collaboration, and when cueing is

inconsistent between simulations it can lead to misunderstandings and impact the

learners’ performance. Repeated misunderstandings that lead to learning disruptions can

affect the learners’ buy-in of learning with simulation activities. Other undesirable

outcomes of conceptual cue use include over controlling or interfering with students’

learning processes. Biggs et al. (2021) recognized how bias, formed as a result of

unintentional cueing in lethal force training, can impact future decision making. They asserted that learners would begin to depend on cues that would not be present

naturally and that unintended cues can create a predictive tendency that learners may

apply in real life situations. The absence of an unintended cue that was present in

simulation, then, may impact the noticing of and responding to patient conditions, affect

decision making processes (clinical reasoning), and ultimately impact clinical decisions

(clinical judgments) in real clinical situations.

To prevent any disruption in the learning process, scaffolds can be implemented

to support learners' recognition of intended cues. Cue recognition is influenced by

familiarity (of medical conditions) and cue recognition instruction (Burbach &

Thompson, 2014; Thiele et al., 1986). Thiele et al. (1986) determined that carefully

designed computer-based simulation scaffolds can have a significant positive impact on

learners’ ability to recognize and sort cues related to patient situations. Burbach &

Thompson (2014) determined that nurses in clinical practice recognize and interpret cues to effectively identify and respond to patient needs, that is, to make

clinical judgments. Unfortunately, current literature on cue recognition indicates that

noticing and responding to cues in simulation is a great challenge for nursing students (Poledna et al., 2022). Teaching clinical judgment, beginning with noticing prominent

cues, is critical to prevent poor decision making (clinical reasoning) and poor clinical judgments. Despite the importance of cue recognition in patient care, additional research

involving scaffolds in simulation to specifically support cue recognition is lacking. The

following discussion involves reports of scaffolds in simulation pre-brief to support

clinical reasoning and clinical judgment development.

Pre-Brief Scaffolds

Pre-briefing is a critical component of simulation and helps learners prepare for

the simulated encounter (McDermott et al., 2021). Pre-brief scaffolds, within the

conceptual definition in this dissertation, are temporary supports for developing a deep

understanding that facilitates conceptual application and transfer. It is noted that some of

the literature involving pre-briefing describes interventions which may or may not be

considered scaffolds, as it is not always apparent if the intervention leads to application

and transfer of nursing concepts or increased independence in practice. When

scaffolding is conceptually apparent in nursing education literature, the term ‘scaffold’ is rarely used; such supports are instead often referred to as an activity, strategy, or resource.

Regardless of the term used, pre-brief design is considered vital to successful simulation

implementation (McDermott et al., 2021).

Despite the importance of pre-briefing, a review of the literature suggests it is an

understudied component of simulation, and much of the current literature examines

learner perceptions of pre-brief concepts. Kim et al. (2017) studied the effects of pre-

briefing on student’s perceived flow of the simulation that followed, while Solli et al.

(2020) explored student perceptions of the overall role of the pre-brief facilitator. Other

researchers have explored student perceived effects of pre-brief on learning and

confidence (Chamberlain, 2017), anxiety (Wheeler et al., 2021), and effectiveness

(Anderson et al., 2022). There are two studies which examine the learners’ perceived

effect of pre-brief (fictional contracts) on psychological safety (Stephen et al., 2020; Roh

& Jang, 2017). Notably, the evaluation criteria involved only student perceptions, and

such self-rated perceptions are highly subjective (Polit & Tatano Beck, 2008).

While subjective student perceptions are valuable in providing a more holistic

view when analyzing outcome data, limited other research has been conducted using

more objective measures to explore the effect of pre-briefing scaffolds. In Jarvill’s

(2021) study, students in the treatment group viewed a video of an expert role model

performing the psychomotor skill they would complete during the upcoming simulation.

Students in the treatment group received higher performance ratings on a faculty-developed skills checklist than students who did not view the video in the pre-brief.

While no reliability or validity data were offered for the checklist, it was modeled after a skills assessment book used by the institution, and a single trained rater, blinded to participant assignment, evaluated all skill performances, avoiding concerns about inter-rater reliability. Beman (2017) also used objective measures to evaluate the effect

of pre-brief scaffolds. In Beman’s (2017) study, the CCEI was used to evaluate clinical

competence (including knowledge, attitudes, skills, clinical reasoning, and clinical

judgment) between groups that received different pre-brief scaffolds prior to

simulation (Notarnicola et al., 2016). Regardless of the scaffold employed (standard pre-

brief, care planning, or concept mapping), there were no significant changes in clinical

competency between groups. There were, however, significantly different scores of

clinical competencies between participants in the two simulation scenarios used. This

result was not unexpected, since inter-rater reliability was low (Kappa = 0.096, p = 0.02). Beman also posited that the varying scores may have been due to faculty simulation facilitators who provided different levels of cueing during simulation.

Other research on pre-brief scaffolds has explored their effect specifically on

clinical judgment (Page-Cutrara & Turk, 2017; Daley et al., 2017; Sharoff, 2015). Page-

Cutrara & Turk (2017) applied reflection theory and concept mapping activities to

provide a structure to simulation pre-brief and used the CCEI to evaluate clinical

judgment during simulation. A statistically significant difference in clinical judgment

(F(1,73) = 74.0, p < 0.001) between the treatment group (students receiving the

structured pre-brief) and the control group was found. Daley et al. (2017) also assessed

clinical judgment during simulation following implementation of concept mapping as a

pre-brief scaffold. Student simulation performances were analyzed by the researchers

and behaviors were coded in alignment with components of Tanner’s CJM (noticing,

interpreting, responding, and reflecting). Mean scores of clinical judgment behavior

increased only for noticing behaviors (control mean = 4.2, experimental mean = 7.1); all remaining components of clinical judgment (interpreting, responding, and reflecting) were unchanged. While noticing behaviors appeared to increase, no measure

of significance was provided between the two groups. Daley et al. (2017) did recognize

that the raters in the study were not blinded to group assignment, a limitation affecting the validity of the findings. Sharoff (2015) explored a variety of pre-brief scaffolds

(i.e., pathophysiology review, images, videos, and handouts), and analyzed student

written responses (post simulation) to questions that were developed by the researcher

using Tanner’s CJM as a guide. Sharoff evaluated the students’ written reflections

with the LCJR and concluded that students perceived their clinical judgment was

enhanced with the use of a variety of pre-brief scaffolds. No reliability or validity

information was shared regarding the researcher developed reflection questions, and

using the LCJR to assess CJ via student written reflections remains underreported.

Two additional studies examined the effect of a pre-brief scaffold on nursing

students’ clinical judgment. In both Johnson et al.’s (2012) and Coram’s (2016) studies,

faculty evaluated the students using the LCJR and found improved clinical judgment

scores after students viewed a VRS (with an expert role model performing the simulation) during pre-brief. Coram reported the validity and reliability of the LCJR from previous investigators (Adamson et al., 2012; Victor-Chmil & Larew, 2013), though reliability of the LCJR specifically in Coram’s study was not reported. Evaluated

by expert faculty, the treatment group in Coram’s study scored significantly higher (p =

.00) in clinical judgment, receiving scores in the ‘developing’ category, while the control

group received scores in the ‘novice’ category. Johnson et al. (2012) also noted

significant differences in clinical judgment between groups, reporting a large effect size

(Cohen’s d ≥ 1.3), with the treatment group scoring higher in the noticing, interpreting,

and responding components of clinical judgment. In Johnson et al.’s study, the nurse in the pre-brief video verbalized their thinking aloud, whereas in Coram’s study students read a document outlining the nurse’s clinical reasoning and clinical judgments. Though the scaffold differed slightly between the studies, students in the

treatment groups demonstrated enhanced clinical judgment in the simulation that

followed the pre-brief with the VRS.

The prior discussion presents pre-briefing simulation research involving a variety

of different interventions, outcomes, and research designs. With just five studies noted in

the literature to evaluate pre-brief scaffolds’ effect on clinical judgment, and the

inconsistency of research measurements and outcomes, progress in developing consensus

on best practice pre-briefing to support clinical judgment during simulation is lacking

(Dileone et al., 2020). Of the five studies that examined the effect of pre-brief scaffolds

on clinical judgment, the type of pre-brief scaffolds varied, and the measurement of

clinical judgment has not been consistent. However, Johnson et al. and Coram

investigated a similar intervention (a VRS scaffold) and instituted design methods (e.g.,

blinded reviewers) to enhance the strength of their studies (Polit & Tatano Beck, 2008).

Only Johnson et al., however, explored the intervention at multiple study sites. Despite

Johnson et al.’s effort to obtain a diverse sample from multiple study sites, a convenience

sample population was used, limiting generalizability of the results. While

generalizability is limited, the similar design, intervention, and results of Johnson et al.’s

and Coram’s studies offer support of the pre-briefing scaffold of expert role model

observation in a VRS prior to simulation participation to increase clinical judgment.

Of note, in both Johnson et al.’s and Coram’s studies, two student nurses

participated together in the simulation following the pre-brief scaffold of expert role

model observation in a VRS. While only the lead student nurse’s clinical judgment in the

simulation was evaluated, it is not apparent if the supporting student nurse influenced the

lead student nurse’s clinical reasoning. It also remains unclear if the simulation operator

provided unintentional cues during the simulation experiences. Without this information,

it remains unclear if the pre-brief scaffold of expert role model observation in a VRS

influenced the change in clinical judgment, or if other factors were involved. To direct

best practices in simulation pre-briefing to increase independent clinical judgment

development, there is a need to build on the existing data involving the pre-brief scaffold

of expert role model observation in a VRS. The PELS-CJI framework supports the use of

an I-VRS during pre-briefing to facilitate increased independent clinical judgment

development.

Summary

This literature review has presented an exploration of learning, teaching, and

assessing clinical judgment. From this review, it is apparent that learning and teaching

clinical judgment is complex. Nurse educators use AL strategies developed from

constructivist and sociocultural approaches to facilitate nursing students’ clinical

judgment development. The use of simulation is a dominant AL strategy in research on

clinical judgment development.

In several studies that measure clinical judgment in simulation learning

experiences, the researchers often present qualitative data (from the students’ or their own personal perspectives). The CCEI and the LCJR offer a means for quantitative

measurement of clinical judgment in simulation. While the CCEI measures additional

components of clinical competency, the LCJR explicitly measures the outcomes of

clinical judgment and differentiates between varying levels of clinical judgment

competence.

With tools to assess clinical judgment in simulation learning experiences, the use

of simulation scaffolds to support clinical judgment development warrants further

exploration. Scaffolds are teaching strategies that can strengthen learning and growth,

supporting the learner through the ZPD as they achieve independence. Intentional cueing

and debriefing are common scaffolds in nursing simulation. It is important to recall that

these scaffolds are used during and after participation in a simulation experience, leaving

independent growth to be determined in subsequent activities or clinical practice. Based

on the PELS-CJI, a scaffold may additionally be situated prior to simulation

participation, allowing the opportunity to support application of clinical reasoning skills

and promote clinical judgment independence during the simulation encounter.

One scaffold to support the learner before simulation participation is an I-VRS

during pre-brief. Observation of a VRS prior to simulation has been shown to increase nursing

student clinical judgment, though the studies that support this scaffold do not discuss the

effect of the VRS on the degree of independent clinical judgment. With expert clinical

judgment development over time being a dominant theme in the literature, herein lies an

opportunity in nursing education research. Additional research involving teaching and

learning strategies with simulation that will support the student through the ZPD to

enhance their clinical reasoning skills and promote independent, enhanced nursing

clinical judgment is warranted.

Informed by an extensive review of the literature, it was hypothesized that a pre-

brief scaffold (I-VRS of an expert role model that includes AL principles of prompting,

exploration, and student reflection) would increase clinical reasoning development

leading to enhanced clinical judgment and independence in nursing clinical judgment

decision making. The following chapter describes the methodology used to test the

hypotheses, and Chapters IV and V address the study results and implications,

respectively.

Chapter III: Methods

The purpose of this chapter is to describe the methodology for this quantitative

research study regarding a simulation pre-brief scaffold to assist learners in moving

through their ZPD toward clinical judgment independence. The research plan is

presented including design, setting, selection of participants, protection of human

subjects, instruments used, research questions and null hypotheses, data collection, and

data analysis procedures.

Design

This dissertation research used a quantitative experimental design to address the research questions. Quantitative measures provide data regarding variable relationships (Polit & Tatano Beck, 2008). Because the purpose of this study was to evaluate the efficacy of a simulation pre-brief to support clinical judgment

development and independence in clinical judgment, a quantitative approach was

appropriate. A true experimental design involves not only manipulation of the

independent variable, but also randomization of subjects and a control group (Polit &

Tatano Beck, 2008). To achieve randomization and avoid systematic bias, participants

were randomly assigned group placement. Participants completed the simulation on their assigned clinical day, and upon arrival at the simulation lab a code generator (the Android app Random Code Generator) was used to randomly assign each participant to either the control (1) or treatment (2) group, supporting equal distribution of participants between groups.
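The study used the Android app Random Code Generator; as an illustration only, equivalent balanced random assignment of arriving participants could be sketched in Python as follows (the function name and seed are hypothetical, not part of the study protocol):

```python
import random

def assign_groups(n_participants, seed=None):
    """Randomly assign participants to control (1) or treatment (2),
    keeping group sizes as equal as possible (balanced randomization)."""
    rng = random.Random(seed)
    # Build a balanced pool of group codes, then shuffle it.
    half = n_participants // 2
    codes = [1] * half + [2] * (n_participants - half)
    rng.shuffle(codes)
    return codes

# Example: assign 10 arriving participants.
groups = assign_groups(10, seed=42)
print(groups.count(1), groups.count(2))  # 5 5
```

Shuffling a pre-balanced pool (rather than flipping a coin per participant) guarantees near-equal group sizes, which is one way to operationalize the "equal distribution" goal described above.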

Setting

The study took place at a Commission on Collegiate Nursing Education (CCNE)

accredited college of nursing at a midwestern university in the United States. The

simulation lab in the college of nursing adheres closely to the International Nursing

Association of Clinical Simulation Learning (INACSL) healthcare simulation standards

of best practice.

Selection of Participants

A convenience sample of Bachelor of Science in nursing students was used for

this study. The convenience sample included two cohorts of nursing students (traditional

Bachelor of Science in nursing [TBSN] and accelerated Bachelor of Science in nursing

[ABSN]). All participants were in their senior year in the nursing curriculum.

Inclusion/Exclusion Criteria

Inclusion criteria were senior-level students with prior high-fidelity simulation experience. To avoid prior exposure to the study-specific simulations and increased exposure to course content, students repeating the course were excluded. A yes/no question in the demographic survey asked whether this was the student’s first time enrolled in the course. If the student answered ‘no,’ they were not observed for the study and their data was not included, though they completed the simulation activities as all other students did.

Protection of Human Subjects

This research study involved student participants to inform a strategy and future

research in nursing simulation education. The Associate Dean for Research at the

selected college of nursing was contacted via written correspondence to obtain approval

for study implementation. Final approval was provided from the college of nursing’s

research review board, the university study site’s IRB, and the Indiana University IRB.

All potential participants were emailed a link to the study consent upon arrival to

the simulation lab, with options to either agree or decline to participate in the study. If

participants agreed to participate in the study, they were automatically directed to

complete a demographic survey powered by Qualtrics software. To protect

confidentiality all data obtained from Qualtrics reports was securely stored in the cloud

secured data Microsoft Teams drive. While participants who declined to participate were

required to complete the simulation activity as part of the course requirements, no

collection of data occurred on participants who did not consent. Participant privacy was

maintained, and all data was deidentified. Upon arrival to the simulation lab, a teaching

assistant provided each student with an individual assigned participant identification

number and observed as each student entered their assigned participant identification number in the survey consent. The teaching assistant correlated the assigned participant

identification number to the associated student case in CAE LearningSpace, the software

used to securely record the simulations. The researcher was only granted access to the

student’s assigned participant identification number and correlating video.

Recruitment Procedures

Participant recruitment and retention was not an anticipated concern as the

intervention occurred during a regularly scheduled course simulation event. The

researcher presented the study to all students enrolled in the course one week before the

scheduled simulation. To avoid coercive bias, the class instructor was not present when

participants were informed of the research study. Students were provided the opportunity

to consider participation and ask any questions. Students were informed that there were

minimal risks to participating in this study, and that participation or lack of participation

would not impact the student’s course grade. With minimal risk to participants, IRB

exemption was obtained. The accessible population was n = 84, and the actual sample population was n = 67 (TBSN cohort: n = 36; ABSN cohort: n = 31).

Instruments

The LCJR was used to measure participant clinical judgment, one dependent

variable in this study. Clinical judgment refers to the nurse’s decisions and how they

respond to the patient’s needs based on the clinical situation. In this study clinical

judgment was operationalized as the total score for clinical judgment and sub scores for

noticing, interpreting, and responding on the LCJR. The reliability and validity of the

LCJR was previously presented in chapter two.

The LCJR focuses and expands on the four components of clinical judgment

(noticing, interpreting, responding, and reflecting) with 11 total correlating

descriptors. Each descriptor has associated developmental levels: beginning, developing,

accomplished and exemplary. Total clinical judgment scores range from 11-44, and

points are awarded to each descriptor as follows: 1 for beginning clinical judgment, 2 for

developing clinical judgment, 3 for accomplished clinical judgment and 4 for exemplary

clinical judgment (Lasater, 2007b). Overall scores are considered beginning (11),

developing (12-22), accomplished (23-33) or exemplary (34-44). To examine the effect

of the intervention on clinical judgment in simulation, all data was collected prior to the

simulation debriefing. To assess the reflecting component of clinical judgment, it would

have been necessary to require participants to think out loud during the simulation

experience. Since requiring thinking out loud during simulation can decrease the

authenticity of the simulation environment, the reflecting component of clinical judgment

was not measured in this study and only the noticing, interpreting, and responding

components of clinical judgment were scored (Burbach et al., 2015). Clinical judgment

scores were thus adjusted by decreasing the maximum score by the maximum points that would have been awarded for the removed descriptors of the reflecting component (four points for each of the two descriptors, for a total of eight points), and decreasing the minimum score by the points that would have been awarded at minimum (one point for each of the two descriptors, for a total of two points). Clinical judgment scores obtained in this study

were considered beginning (9), developing (10-18), accomplished (19-27) or exemplary

(28-36). The Lasater Clinical Judgment Rubric Scoring Sheet (LCJRSS) was used to

record total and subscale clinical judgment scores (Cato et al., 2009) (See Appendix B).

The LCJRSS was modified with permission to exclude the Reflecting components and

transposed into Qualtrics to securely store data. The LCJRSS is attached to the following

link: LCJRSS
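The adjusted scoring arithmetic described above can be sketched as follows; the function names are illustrative and not part of the published LCJR:

```python
def lcjr_total(descriptor_ratings):
    """Sum per-descriptor ratings (1 = beginning ... 4 = exemplary).
    With the two reflecting descriptors removed, 9 descriptors remain,
    so adjusted totals range from 9 to 36."""
    if len(descriptor_ratings) != 9:
        raise ValueError("expected 9 descriptor ratings (reflecting excluded)")
    if not all(1 <= r <= 4 for r in descriptor_ratings):
        raise ValueError("ratings must be between 1 and 4")
    return sum(descriptor_ratings)

def lcjr_category(total):
    """Map an adjusted total to the developmental category used in this study."""
    if total == 9:
        return "beginning"
    if total <= 18:
        return "developing"
    if total <= 27:
        return "accomplished"
    return "exemplary"

print(lcjr_category(lcjr_total([2] * 9)))  # developing (total = 18)
```

A student rated "developing" (2 points) on every remaining descriptor thus earns an adjusted total of 18, the top of the developing band.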

The third dependent variable in this study was unintended conceptual cues.

Unintended conceptual cues were operationalized as a count of the number of cues that

are not planned and purposefully integrated into the simulation scenario(s), including

exaggerated clinical presentation progression or patient comments (by the simulation

operator) to draw attention to the patient condition or otherwise allow the learner to

progress in the scenario.

The I-VRS

The I-VRS was developed in accordance with the PELS-CJI and was designed to

promote active engagement (concrete experience) involving written clinical judgment

identification (reflective observation), and comparison and exploration with an expert

role model (active conceptualization) in a VRS. The Rose Smith simulation scenario

(one of three parallel simulation scenarios) was randomly selected to create the expert

nurse VRS and involved a pediatric client with pneumonia and a surgical site infection.

The simulation used for the expert nurse VRS was not used as a scenario for student

simulations during the study. The expert nurse in the video was a simulation teaching

assistant with > 5 years of experience as a registered nurse and three years of simulation

teaching experience. The expert nurse was provided a script for the simulation

encounter. Key points of care where clinical reasoning informs clinical judgments were

identified in the video. At these key points, PlayPosit software was used to intentionally

stop the video and prompt learner responses to actively engage participants in the I-VRS.

The prompts asked participants to denote their thoughts to the correlating descriptors of

the components of clinical judgment (noticing, interpreting, responding, and reflecting).

Once the participant entered their responses, the video resumed and the nurse explained

their thoughts as they pertained to the correlating descriptors of the components of

clinical judgment (noticing, interpreting, responding, and reflecting). After the expert

nurse explained their thoughts and rationale, the simulation continued until the next point of care where a clinical judgment was made, and the process repeated until the completion of the simulation encounter.
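The pause-prompt-resume flow described above can be sketched abstractly. The structure below is purely illustrative: PlayPosit's actual API is not modeled, and all names, timestamps, and component assignments are hypothetical.

```python
# Hypothetical pause points: at each key point of care the video pauses,
# the learner records their thinking, then the expert's explanation plays.
PAUSE_POINTS = [
    {"time_s": 120, "component": "noticing"},
    {"time_s": 300, "component": "interpreting"},
    {"time_s": 480, "component": "responding"},
]

def run_ivrs(pause_points, get_learner_response):
    """Model the I-VRS loop: pause -> prompt -> record -> resume.
    Returns the collected learner responses for each pause point."""
    responses = []
    for point in pause_points:
        # 1. Video pauses at the key point of care.
        # 2. Learner is prompted to record thinking for that CJ component.
        answer = get_learner_response(point["component"])
        responses.append({**point, "learner_response": answer})
        # 3. Video resumes; the expert role model explains their reasoning,
        #    letting the learner compare their thinking with the expert's.
    return responses

log = run_ivrs(PAUSE_POINTS, lambda component: f"my {component} notes")
print(len(log))  # 3
```

The key design point is that the learner commits to a response before hearing the expert's reasoning, which is what makes the comparison step (active conceptualization) meaningful.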

Research Questions and Null Hypotheses

This study sought to collect data pertaining to three questions. The first research

question was:

1) Is there a difference in total clinical judgment among prelicensure nursing

students engaged in simulation when differentiated by use of a scaffold during

pre-brief or no scaffold?

The null hypothesis was: There will be no significant difference in total clinical

judgment among prelicensure nursing students engaged in simulation when

differentiated by use of a scaffold during pre-brief or no scaffold.

The second research question was:

2) Is there a difference in the clinical judgment components of noticing, interpreting,

and responding among prelicensure nursing students engaged in simulation when

differentiated by use of a scaffold during pre-brief or no scaffold?

The null hypothesis was: There will be no significant difference in the clinical

judgment components of noticing, interpreting, and responding among

prelicensure nursing students engaged in simulation when differentiated by use of

a scaffold during pre-brief or no scaffold.

The third research question was:

3) Is there a difference in the number of unintended conceptual cues provided during

simulation when differentiated by use of a scaffold or no scaffold during pre-

brief?

The null hypothesis was: There will be no significant difference in the number of

unintended conceptual cues provided during simulation when differentiated by

use of a scaffold or no scaffold during pre-brief.

Data Collection

Data collection procedures are an important component of quantitative research

(Polit & Tatano Beck, 2008). The intervention and data collection procedures were

executed as discussed in the following section.

Intervention Distribution and Pre-Briefing Activities

A teaching assistant used a code generator to randomly assign participants to

either the control (1) or treatment (2) group upon arrival at the simulation lab, and

observed as each student entered their assigned group number in the demographic survey if they

chose to participate. The pre-brief components, including instructions for pre-brief

activities and the simulation orientation, were delivered in video format to ensure

consistent delivery of the intervention. At the beginning of each pre-brief video, the

simulation lab policy regarding confidentiality of the scenarios was presented to

minimize post simulation collaboration and event detail sharing. In adherence to the

INACSL standards, a fiction contract was also shared.

Treatment fidelity goals for the design of this study involved treatment dose

delivery consistency (Bellg et al., 2004). To ensure an equivalent pre-briefing “dose,” all

participants were allotted equal time during the pre-brief. Both the control and treatment

group received identical pre-brief instructions and access to the patient electronic chart,

the simulation scenario and objectives, and correlating readings from the assigned text.

The treatment group was additionally provided with the I-VRS, which took about 20

minutes to complete. The control group was instructed to review the correlating text

readings while the intervention group completed the I-VRS. No less than 40 minutes was allotted for pre-briefing, which was completed in the simulation lab.

Additional steps were taken to maintain treatment fidelity goals of this study

including training of those involved in the study implementation and simulation

delivery. All simulation operators involved in the study were familiar with the simulation

cases which had been implemented several times during previous semesters. Simulation

operators followed scripted scenarios to ensure consistent simulation delivery.

Additionally, a training session (led by the researcher, a certified healthcare simulation

educator [CHSE]) occurred two weeks prior to the study implementation to ensure

standardization of simulation delivery. During the training session, the entire study

procedure was piloted including participant arrival, random assignment, pre-briefing

activities, and simulation delivery. Simulation operators took turns role playing the

student participants and the simulation operators multiple times. To avoid potential

influence of simulation cue delivery at the time of the study, the simulation operators

were blinded to the participant group assignment.

Simulation Scenarios

Three parallel simulations with different core scenarios but identical objectives

were developed by a team of expert nursing faculty and one simulation faculty who is a

CHSE. While not a focus of this dissertation study, the simulations were developed to

assess student competency based on the course objectives which align with the essential

core competencies for professional baccalaureate nursing education as determined by the

American Association of Colleges of Nursing (AACN). The simulations have undergone

evaluation and revision over a two-year period preceding this study. The three scenarios

included 1) Tiffany Smith, a four-year-old with failure to thrive and aspiration pneumonia; 2) Max Smith, an 18-month-old with neutropenia and community-acquired pneumonia; and 3) Rose Smith, a three-year-old with a surgical site infection (shunt) and hospital-acquired pneumonia. The objectives of the simulations were:

• Perform focused assessment of two pertinent systems of concern

• Perform reassessment of two pertinent systems of concern

• Communicate using TeamSTEPPS SBAR to the provider

• Interpret a minimum of 3 labs or diagnostic data for case

• Demonstrate 3 pertinent nursing interventions

Two high fidelity Gaumard manikins were used for the simulation scenarios: one

pediatric Hal and one infant Hal. The manikins were operated by simulation lab staff who

were trained in the operation of simulation equipment and the INACSL standards.

All student participant simulation encounters were recorded with the CAE-

Learning Space video capture platform. The researcher observed videos of all student

participants completing their simulation. To allow for electronic data management, the

LCJR score guide was transposed into Qualtrics Software. To measure the effect of the I-

VRS on conceptual cue recognition, the researcher reviewed all the simulation recordings

to count the number of unintended conceptual cues offered by the simulation operator.

The researcher counted all unintended conceptual cues, using the simulation scenario

script to differentiate between intended and unintended conceptual cues.

Data Analyses

Data was analyzed only for participants who completed the demographic survey

and consented to participate in the study. The researcher viewed all participant

simulation videos which were captured using CAE LearningSpace simulation

management software. Statistical analysis of data was conducted using IBM SPSS

(version 29). The research questions were tested using an alpha of .05.

Demographic Data

Demographic information was gathered to describe the sample population.

Demographic data collected included gender, ethnicity, race, age, program enrollment

(TBSN versus ABSN), and course enrollment information. The demographic survey is

attached to the following link: Demographic Survey

Descriptive Statistics

Descriptive statistics were analyzed to determine the nature of measures of central

tendency and dispersion of the data. The mean, range, and standard deviation were used to

analyze age of participants. Modes were used to analyze gender, ethnic background, and

nursing program enrollment. Means and standard deviations were used to analyze

continuous data pertaining to clinical judgment scores. Mode was used to analyze

categorical data pertaining to the number of unintended cues provided.

Statistical Test – Research Question 1

The first research question was: 1) Is there a difference in total clinical judgment

among prelicensure nursing students engaged in simulation when differentiated by use of

a scaffold during pre-brief or no scaffold? Scoring was completed pertaining to total

clinical judgment scores (sans ‘reflecting’ component) using the LCJR. The primary

researcher was experienced in using the LCJR and its scoring guide. They were the

sole evaluator of clinical judgment for this study and were blinded to the study groups.

Reliability and validity of the LCJR has previously been addressed in Chapter II. To

ensure reliability of the LCJR scoring of the simulation used in this study, internal

consistency was assessed with Cronbach’s alpha.

This study tested for significant differences of total clinical judgment scores

between the control and intervention groups. Dependent upon assumptions being met,

either a parametric or non-parametric technique would be appropriate to test for

significant differences between two groups (Pallant, 2020). In this study, the dependent

variable (total clinical judgment score) is a continuous interval variable. Student

participants were randomly assigned to either control or intervention groups. Student

participants engaged in the scaffold during pre-brief and the simulation independently,

permitting independence of observations. Normal distribution of scores was assessed

with histogram analysis, and homogeneity of variance was identified with non-significant

Levene’s test (significance greater than .05) for equality of variances. Because the

assumptions for parametric techniques were met, an independent t-test was conducted.
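A minimal sketch of the pooled-variance independent-samples t statistic follows; the p-value lookup against the t distribution (as performed by SPSS) is omitted, and the score data shown are fabricated for illustration only:

```python
from statistics import mean, variance

def independent_t(group_a, group_b):
    """Student's independent-samples t statistic with pooled variance,
    appropriate when normality and equal-variance assumptions hold."""
    na, nb = len(group_a), len(group_b)
    ma, mb = mean(group_a), mean(group_b)
    # Pooled variance combines both groups' sample variances,
    # weighted by their degrees of freedom.
    sp2 = ((na - 1) * variance(group_a) + (nb - 1) * variance(group_b)) / (na + nb - 2)
    t = (ma - mb) / (sp2 * (1 / na + 1 / nb)) ** 0.5
    df = na + nb - 2
    return t, df

# Illustrative (fabricated) adjusted total scores for two small groups.
t, df = independent_t([30, 28, 27, 31], [24, 25, 23, 26])
print(round(t, 3), df)  # 4.025 6
```

In practice the resulting t and df would be compared against the t distribution to obtain the p-value tested at the study's alpha of .05.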

Statistical Test – Research Question 2

The second research question was: Is there a difference in the clinical judgment

components of noticing, interpreting, and responding among prelicensure nursing

students engaged in simulation when differentiated by use of a scaffold during pre-brief

or no scaffold? Scoring was completed pertaining to subscale scores (noticing,

interpreting, and responding) of clinical judgment using the LCJR. To test the null

hypothesis that subscale scores of clinical judgment are independent of use of the I-VRS

scaffold during pre-brief, ordered metrics were assigned to each of the variables. No

scaffold use was coded as 1, and use of the I-VRS scaffold was coded as 2. Noticing,

interpreting and responding were coded as beginning (9), developing (10-18),

accomplished (19-27), and exemplary (28-36). Because there were more than two

ordered categories represented in the subscale results, an ordinal chi-square (also known

as the linear-by-linear association test) was utilized to test for differences. The ordinal

chi-square offers greater power than the standard chi-square to detect a linear pattern of

association between ordered variables (Agresti, 2007).
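For illustration, the linear-by-linear statistic can be computed as M2 = (N - 1) * r^2 with one degree of freedom, where r is the Pearson correlation between ordered row and column scores (Agresti, 2007). The sketch below uses an invented 2 x 3 contingency table, not study data.

```python
# Illustrative sketch: the linear-by-linear association (ordinal chi-square)
# test, computed as M2 = (N - 1) * r**2 with df = 1, where r is the Pearson
# correlation between row and column scores. Counts below are invented.
import numpy as np
from scipy.stats import chi2

def linear_by_linear(table):
    """table: counts with ordered rows (group) and columns (category)."""
    rows, cols = table.shape
    # Expand the contingency table into paired ordinal scores
    r_scores, c_scores = [], []
    for i in range(rows):
        for j in range(cols):
            r_scores += [i] * table[i, j]
            c_scores += [j] * table[i, j]
    r = np.corrcoef(r_scores, c_scores)[0, 1]
    n = table.sum()
    m2 = (n - 1) * r ** 2              # test statistic, df = 1
    p = chi2.sf(m2, df=1)
    return m2, p

# Hypothetical 2 x 3 table: group (no scaffold / scaffold) by ordered
# subscale category (developing / accomplished / exemplary)
counts = np.array([[10, 18, 8],
                   [4, 14, 13]])
m2, p = linear_by_linear(counts)
print(f"M2 = {m2:.3f}, df = 1, p = {p:.3f}")
```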

Statistical Test – Research Question 3

The third research question was: Is there a difference in the number of unintended

conceptual cues provided during simulation when differentiated by use of a scaffold or no

scaffold during pre-brief? To test the null hypothesis that there is no difference in the

number of unintended conceptual cues provided during simulation when differentiated by

use of the I-VRS scaffold or no scaffold during pre-brief, an ordinal chi square test

was conducted.

Power Calculations and Effect Size

In research, there is the potential to make a Type I or Type II error (Pallant,

2020). Type I errors occur when the null hypothesis is rejected when it is in fact true; Type II errors occur when a false null hypothesis is not rejected. To

reduce error and increase statistical power, several factors are considered including

sample size, effect size, and alpha level. Johnson et al. (2012) conducted a study that

involved similar design methods and interventions. In Johnson’s study, a video of an

expert role model who discussed their decision processes was shown to students during

simulation pre-briefing. Individual sub scores of clinical judgment (noticing,

interpreting, responding, and reflecting) were measured with the LCJR post simulation

and debriefing. Data were analyzed with Kruskal-Wallis tests. Based on a sample size of

94, a post-hoc power analysis (alpha = 0.05) determined that a sample size of 23

participants in each group (control and intervention) would produce similar results. A

large effect size (Cohen’s d > 1.13) was determined.

Another study conducted by Coram (2016) involved similar design methods and

interventions. In Coram’s study, a video of an expert role model was shown to students

prior to simulation, but the expert nurse’s decision-making process was provided to

participants in written form on a document (as opposed to verbal discussion). In Coram’s

study (n = 43), power calculations (0.97-0.99) and effect sizes (1.18-1.83) were deemed

adequate. Given the similarity in design, it was expected that the sample size of 67

would have adequate power for the proposed study.
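As an illustration, an a priori power calculation for an independent t-test can be run with the statsmodels library. The effect size below uses the d = 1.13 reported by Johnson et al. (2012); the target power of 0.80 is an assumption chosen for illustration, not a value stated in the cited studies.

```python
# Illustrative sketch: a priori power calculation for an independent t-test.
# Effect size d = 1.13 is from Johnson et al. (2012); target power of 0.80
# is an assumed conventional value for this illustration.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=1.13, alpha=0.05,
                                   power=0.80, alternative='two-sided')
print(f"required n per group ~ {n_per_group:.1f}")
```

Larger assumed effect sizes require fewer participants per group to reach the same power, which is why studies reporting d above 1 can justify relatively small samples.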

Limitations

Though the design of this study was carefully constructed, some limitations were

present. One significant limitation was that the study was conducted at one college of

nursing, with only students enrolled in a BSN program. A single-site design limits the

generalizability and impact of the findings (Duffy, Frenn, & Patterson, 2011). Another limitation

presents itself regarding the race/ethnicity of the study sample population. At the chosen

site for this pilot study, most nursing students are Caucasian females. The American

Association of Colleges of Nursing reported that 40.8% of baccalaureate nursing students

identify with a race/ethnicity other than Caucasian, and the accessible study

population is not representative of the minority nursing student population (AACN,

2022).

Summary

The purpose of this chapter was to describe the research plan used to answer the

research questions of this dissertation study. The research questions and null hypotheses

were presented, along with the study design, setting, selection of participants, protection

of human subjects, instruments used, research questions and null hypotheses, data

collection, and data analysis procedures. Chapter IV will present the study results.

Chapter IV: Results

Chapter IV includes the results of this dissertation study which examined the use

of a pre-brief scaffold to support undergraduate nursing students’ clinical judgment

development. Demographic data describing the sample population are shared.

Descriptive statistics are presented to summarize the central tendency

and dispersion of the data. Composite measures are presented regarding internal

consistency, reliability, and data distribution. Finally, the results from the research

question analyses are presented.

Demographic Data

The accessible sample of undergraduate nursing students for this study was 84.

The actual sample of undergraduate nursing students for this study was 67. Seventeen

students were excluded from the participant sample for various reasons (e.g., declined to

consent (n = 7), arrived late to simulation resulting in a shortened pre-brief period (n = 1),

did not complete intervention in allotted time (n = 3), and technical difficulties during

simulation (n = 6)).

Demographic data reflecting participant gender, ethnicity, race, age, program

enrollment (TBSN versus ABSN), and course enrollment information was collected.

Eighty-eight percent of participants (n = 59) identified as female, while 12% of participants

(n = 8) identified as male. Regarding race, 85% of participants (n = 57) identified as

white, 7.5% (n = 5) identified as Black or African American, and 7.5% (n = 5) identified

as Asian. Participants ranged in age from 21 years to 29 years, and the mean age was

22.45 (SD = 1.579). A total of 53.7% (n = 36) were enrolled in the traditional

Bachelor of Science in Nursing program, and 46.3% (n = 31) were enrolled in the

accelerated Bachelor of Science in Nursing program. While overwhelmingly female

and white, the distribution across groups was similar and representative of the Midwest

university’s enrollment in the college of nursing. See Table 1 for the grouped frequency

distribution of participants.

TABLE 1: Demographic Distribution of Study Participants

              Age (mean   Identified    Identified     White   Black or African   Asian   Program
              in years)   as Male (n)   as Female (n)  (n)     American (n)       (n)     TBSN/ABSN
Total         22.45       8             59             57      5                  5       36/31

Control       22.53       6             30             30      3                  3       21/15

Treatment     22.35       2             29             27      2                  2       15/16

Thirty-six of the sample population were randomly assigned to the control group

and 31 participants of the sample population were randomly assigned to the treatment

group. Equal distribution of participants to simulation cases (Max or Tiffany) was

desired, though eight of the ten participants who were excluded had been assigned to the

Max Smith case, resulting in the following: of the 36 participants in the control group, 14

participated in the Max Smith case and 22 participated in the Tiffany Smith case. Of the

31 participants in the treatment group, 10 participated in the Max Smith case and 21

participated in the Tiffany Smith case. See Table 2.

TABLE 2: Mean Total Clinical Judgment by Case and Group


Control Treatment Control Treatment
Max Max Tiffany Tiffany
n = 14 n = 10 n = 22 n = 21

Mean CJ Score 26.50 31.20 24.14 27.14

Std. Deviation 4.202 3.736 5.759 5.304

Since the ‘reflecting’ category of clinical judgment was not scored in this study,

clinical judgment scores were adjusted to reflect total subscales of the three components

(noticing, interpreting, and responding). The resulting categories were beginning (9),

developing (10-18), accomplished (19-27) or exemplary (28-36). Students in the control

groups had mean clinical judgment scores aligning with the accomplished category of

clinical judgment, regardless of assigned case. Participants in the treatment group

assigned to the Max case had a mean clinical judgment score aligning with the exemplary

category of clinical judgment. Participants in the treatment group assigned to the Tiffany

case had a mean clinical judgment score that aligned in between the accomplished and

exemplary categories (see Table 2). When means for treatment and control groups

overall were calculated, the control group had mean clinical judgment scores aligning in

the accomplished category and the treatment group had mean clinical judgment scores

aligning with the exemplary category of clinical judgment. See Table 3.

TABLE 3: Total Clinical Judgment

              Sample Size   Mean CJ Score   SD      t        Sig.
Treatment     31            28.45           5.163   -2.653   .010

Control       36            25.06           5.275

Reliability of the LCJR

Scoring was completed by one evaluator for the total clinical judgment scores

(excluding the ‘reflecting’ component) using the LCJR. To assess the reliability of the LCJR,

internal consistency was analyzed using Cronbach’s alpha. The resulting Cronbach’s

alpha for this study was .932, indicating a strong level of internal consistency of the

LCJR (Polit & Tatano Beck, 2008).

Results of Research Questions

Each research question of the study is reviewed below. The analyses used and an

explanation of how assumptions were met is shared. Tests used to confirm normality are

also presented, and the results of the analyses are provided.

Research Question 1

The first research question was: Is there a difference in total clinical judgment

among prelicensure nursing students engaged in simulation when differentiated by use of

a scaffold during pre-brief or no scaffold? Parametric tests require a continuous interval

dependent variable, random assignment, independence of observations, and normal

distribution of scores (Pallant, 2020). The total clinical judgment score is a continuous

interval variable and participants were randomly assigned to either control or treatment

groups. The Levene test statistic was analyzed to test for equality of variance and was

.000 (corresponding p-value = .995), indicating homogeneity of variance was not

violated. The assumptions for parametric tests were met, and the independent t-test was

used to analyze for significant differences of clinical judgment scores between groups.

Sixty-seven undergraduate nursing students in their senior year were evaluated on

their clinical judgment in simulation. The results of the independent samples t-test

showed that participants who received the I-VRS scaffold during pre-brief had significantly

higher clinical judgment scores during simulation (n = 31, M = 28.45, SD = 5.163) as compared

to participants in the control group (n = 36, M = 25.06, SD = 5.275), t(65) = -2.653, p <

.01. See Table 3.

The Cohen’s d statistic was used to quantify the effect size as the standardized

difference between the group means, and the resulting statistic was -.650. A Cohen’s d of -.650

indicates a medium effect size (Polit & Tatano Beck, 2008).
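For transparency, Cohen's d for two independent groups can be computed from the group means, standard deviations, and sample sizes via the pooled standard deviation. The sketch below plugs in the values reported in this chapter (control n = 36, M = 25.06, SD = 5.275; treatment n = 31, M = 28.45, SD = 5.163).

```python
# Illustrative sketch: Cohen's d for two independent groups using the
# pooled standard deviation. Inputs are the group statistics reported
# in this chapter (Table 3 and the research question 1 results).
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    # Pooled standard deviation
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

# Control (n=36, M=25.06, SD=5.275) vs. treatment (n=31, M=28.45, SD=5.163)
d = cohens_d(25.06, 5.275, 36, 28.45, 5.163, 31)
print(f"Cohen's d = {d:.3f}")  # approximately -0.649, matching the reported -.650
```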

Research Question 2

The second research question was: Is there a difference in the clinical judgment

components of noticing, interpreting, and responding among prelicensure nursing

students engaged in simulation when differentiated by use of a scaffold during pre-brief

or no scaffold? Noticing, interpreting, and responding are subscale components of

clinical judgment measured with the LCJR. Because there were three ordered categories

represented in the subscales results, an ordinal chi-square was used to test for differences

between groups. For the noticing subscale of clinical judgment, the resulting statistic was

X2 = 6.887, df = 1, p = .009. The interpreting subscale of clinical judgment resulted in a

X2 = 2.820, df = 1, p = .093. Finally, for the responding subscale component of CJ, a

X2 = 4.336, df = 1, p = .037 was found. The results support a statistically significant

relationship for both the noticing and responding subscale scores of clinical judgment and

the use of a scaffold during pre-brief, but not for the interpreting subscale component.

Research Question 3

The third research question was: Is there a difference in the number of unintended

conceptual cues (UCC) provided during simulation when differentiated by use of a

scaffold or no scaffold during pre-brief? An ordinal chi square test was used to test for

differences between treatment and control groups (X2 = 2.828, df = 1, p = .093), showing

no significant difference in the number of unintended conceptual cues provided between

groups (see Table 4).

TABLE 4: Variable Impact on Unintended Conceptual Cues

Variable Impact    Chi Square Statistic    df    Asymptotic Significance (two-sided)
UCC*group          2.828                   1     .093

When differentiated by use of a scaffold or no scaffold during pre-brief, a

difference in the number of UCC provided was anticipated; however, no significant

difference was present. To see if other variables had an impact on the number of UCC

provided between participants in the treatment and control groups, further tests were

completed. An ordinal chi square test was conducted to assess for differences in the

number of UCC when accounting for the simulation case that was used, the simulation

operator, or the participants’ most recent clinical setting. The results similarly indicated

no significant differences in the number of UCC provided during simulation. Regardless

of the variables at play, there appeared to be no association with the number of UCC provided

(see Table 5).

TABLE 5: Additional Variable Impact on Unintended Conceptual Cues

Variable Impact       Chi Square Statistic    df    Asymptotic Significance (two-sided)
UCC*case              1.358                   1     .244

UCC*sim op            1.993                   1     .164

UCC*clinic setting    0.13                    1     .908

Summary of Key Findings

Given the gathered demographic data, it is apparent that most participants in this

study identified as white females with a mean age in their early twenties. Based on the

statistical analysis of data, we reject the null hypothesis for research question one, and

conclude there is a difference in total clinical judgment among prelicensure nursing

students engaged in simulation when differentiated by use of a scaffold during pre-brief

or no scaffold. When examining the individual components underlying CJ, there were

differences in the clinical judgment components of noticing and responding, but not in

the component of interpreting. Finally, the I-VRS did not appear to support a difference

in the number of unintended conceptual cues provided during simulation in this study.

The following and final chapter will present a discussion of these results in context with

current literature along with recommendations for future research, limitations and the

implications of this study.

Chapter V: Discussion and Recommendations

Chapter V presents a discussion of findings in context of current literature relating

to development of total clinical judgment, individual components of clinical judgment,

and clinical judgment independence. Additional discussions relating to the PELS-CJI

framework, the concept of time and clinical judgment development, and the use of the

LCJR to evaluate clinical judgment are presented. The implications and limitations of

this dissertation research study are also explored, along with recommendations for future

research. A review of the current study including the problem, the purpose, and research

questions are presented as a preface to the rest of this chapter.

Review of Current Study

A concerning problem in healthcare is that most new graduate nurses do not

present with the clinical judgment skills necessary to provide safe care (Kavanagh &

Sharpnack, 2021). Nurse faculty must ensure education experiences prepare nursing

students with necessary skills, including safe clinical judgment. Simulation is a learning

strategy that has shown promise in supporting clinical judgment development. While

research on debriefing to improve clinical judgment is abundant, pre-briefing in

simulation remains understudied.

Problem

Undergraduate nursing students need to be better prepared for complex patient

care upon graduation. Simulation to support clinical judgment development is a common

practice in nursing education, and the debriefing component of simulation has been

heavily researched. However, there is a paucity of literature on pre-brief scaffolds to

support clinical judgment development.

Purpose

The purpose of this study was to examine the use of a pre-brief scaffold to support

undergraduate nursing students’ clinical judgment development and clinical judgment

independence. An experimental study was conducted to measure clinical judgment and

clinical judgment independence. Participants in this study completed a pre-brief with or

without a researcher developed scaffold, the I-VRS. Clinical judgment in simulation was

measured with the LCJR, and the number of UCC during simulation were counted to

measure clinical judgment independence.

Research Questions and Findings

There were three research questions. The first research question was 1) Is there a

difference in total clinical judgment among prelicensure nursing students engaged in

simulation when differentiated by use of a scaffold during pre-brief or no scaffold? The

results of this study supported that the use of the I-VRS prior to simulation demonstrated

elevated mean total clinical judgment scores between control and intervention groups.

The second research question was 2) Is there a difference in the clinical judgment

components of noticing, interpreting, and responding among prelicensure nursing

students engaged in simulation when differentiated by use of a scaffold during pre-brief

or no scaffold? The results of this study showed a statistically significant positive

relationship between the I-VRS and both the noticing and responding subscale scores of

clinical judgment, but not for the interpreting subscale component. The third research

question was 3) Is there a difference in the number of UCC provided during simulation

when differentiated by use of a scaffold or no scaffold during pre-brief? In this study

there was no statistically significant difference in the number of UCC provided

during simulation when differentiated using the I-VRS prior to simulation. A discussion

of these findings is presented next.

Discussion of Findings

A discussion of findings as they relate to the effect of the I-VRS during pre-brief

on total clinical judgment, individual components of clinical judgment, and cueing and

clinical judgment independence is explored. The findings are examined in context of

current literature. Further discussion on the use of the PELS-CJI framework to guide pre-

brief scaffold development, the concept of time and clinical judgment development, as

well as the use of LCJR instrument to measure clinical judgment is presented.

Effect of the I-VRS Scaffold on Total Clinical Judgment

New graduate nurses are not currently equipped with clinical judgment skills

necessary for safe practice, and the limited literature reporting the effect of pre-brief

scaffolds on total clinical judgment development shows inconsistent interventions and

outcome measures. Only five studies were found in the literature that examined pre-brief

scaffolds’ effects on clinical judgment (Coram, 2016; Dailey et al., 2017; Johnson et al.,

2012; Page-Cutrara & Turk, 2017; Sharoff, 2015). The scaffolds in these studies were

varied and involved concept mapping, pathophysiology review, images, videos, handouts,

and VRS. Additionally, these studies were inconsistent in their measures to evaluate

clinical judgment, utilizing the CCEI, coding methods, reflection evaluation, and the

LCJR. The varying scaffolds and outcome measures make it difficult to draw evidence-

based conclusions from existing data to inform best practices in scaffold use in pre-brief

to develop clinical judgment. There are two reports by Coram (2016) and Johnson et al.

(2012) that used a similar scaffold (a VRS) and evaluation measure (LCJR).

This dissertation study adds to the small body of existing evidence of using a VRS

during pre-brief to support clinical judgment development. The results of this study

suggest that the I-VRS pre-brief scaffold supports overall clinical judgment development

during simulation. These findings are consistent with those from research conducted by

Johnson et al. (2012) and Coram (2016) where using a VRS during simulation pre-brief

also demonstrated a significant difference in clinical judgment between intervention and

control groups. In the current study, mean clinical judgment scores for those in the

control group demonstrated accomplished clinical judgment, whereas those in the

treatment group demonstrated exemplary clinical judgment. Coram found similar results,

though the clinical judgment scores indicated a beginning clinical judgment rating for

those in the control group and a developing clinical judgment rating for those in the

treatment group. Though the overall ratings between this study and Coram’s study varied

(accomplished/exemplary versus beginning/developing), this is not unexpected since the

participants in Coram’s study were in their first medical surgical course, compared to the

participants in this study who were in their senior year in a nursing program. Recalling

that time to develop clinical judgment is a common theme in the literature, the fact that

the participants in this study demonstrated heightened clinical judgment compared to the

participants in Coram’s study is congruent with their nursing curriculum placement

(Benner, 1982; Cioffi, 2000; Tanner, 2006).

Effects of the I-VRS on Individual Components of Clinical Judgment

Participants in this study demonstrated increased clinical judgment relating to

noticing and responding components of clinical judgment, but not to the interpreting

component of clinical judgment. Johnson et al. (2012) found significant differences in

mean scores between treatment and control groups for all three components of clinical

judgment (noticing, responding, and interpreting) when using a VRS in simulation pre-

brief. Their study design included recorded video analysis of the simulation and

debriefing, allowing for the participants’ thought process to be examined. The lack of

thought process explanation in this study may explain the different results of the

interpreting component of clinical judgment between this study and Johnson’s study.

With few expressions of thought process, it should be considered that the interpreting

component of clinical judgment may be better examined following analysis of debrief

instead of during the simulation.

The addition of the evidence from the current study when considered with

Coram’s and Johnson’s findings support using a VRS with an expert role model during

pre-brief to enhance overall clinical judgment, and the specific components of clinical

judgment, noticing and responding. Though the pre-brief intervention varied slightly in

all three studies, a common denominator in Johnson’s study, Coram’s study, and this

study is that AL principles were applied with an expert nurse in a VRS before the

simulation experience. The expert nurse’s thought processes leading to clinical

judgments were explained to the viewer. In other words, how the expert nurse clinically

reasoned through the information presented in a clinical scenario, and how their clinical

reasoning led to clinical judgments, was shared. How the expert nurse shared their

thought processes (i.e., their clinical reasoning) varied slightly between studies. Whether

via verbal explanation (Johnson) during the VRS, a written supplement provided to the

student while observing the VRS (Coram), or I-VRS with PlayPosit (in this dissertation

study), the use of a VRS with expert nurse clinical reasoning explanations supports

increased clinical judgment during simulation.

Effects of the I-VRS on Cueing and Clinical Judgment Independence

While there was no significant difference between groups regarding the number

of UCC provided by the simulation operator, this finding holds value. Decreased UCC

during simulation was intended to provide evidence relating to independent clinical

judgment, however, there is not enough evidence in the literature about cueing to

interpret a similar cue count between groups negatively. The literature that is available

on cueing in simulation largely reflects conceptually defining and understanding the term

itself (Jeffries, 2005; Alessi, 2000; Adams et al., 2008; Dieckmann et al., 2010). A

scoping review of cueing in nursing simulation revealed missed cue recognition as the

most prominent theme among studies on cueing (Poledna et al., 2022). Cueing’s

impact on learning and other outcomes remains understudied in nursing simulation.

In clinical practice, early cue recognition is a critical skill of expert nursing care,

and the action of gathering cues is the first step to formulating clinical judgments

(Benner, Tanner, & Chesla, 2009; Hammond et al., as cited in Burbach & Thomson, 2014). One

prominent differentiator between novice nursing students’ and expert nurses’ cue

recognition is that the novice assigns equal value to all cues and examines them in a sequential

manner. Such interpretation can lead to missed priorities in providing care. It is possible

that participants in the treatment group in this study, though requiring the same number of

cues as participants in the control group, were better at recognizing cues and

differentiating their importance. In other words, they may have noticed the same number

of cues but could more appropriately interpret their importance and respond accordingly.

To examine this notion further, we explored if there were differences in the number of

cues provided and other variables (simulation operator, simulation scenario, and most

recent clinical site). No significant relationships were found, eliminating competing

variables that could explain the difference in clinical judgment between participants in

the treatment and control groups, and supporting the thought that those who engaged with

the I-VRS were more able to independently recognize conceptual cues. With total

clinical judgment improved among participants in the treatment group in this study,

additional research on cueing is necessary before discounting the use of the PELS-CJI

framework to inform simulation pre-briefing interventions to increase independent

clinical judgment.

PELS-CJI and the I-VRS Scaffold

With no apparent frameworks to guide pre-briefing to enhance clinical judgment

available in the literature, the PELS-CJI framework was developed to inform the I-VRS

pre-briefing activity. Jeffries (2015) calls for pre-brief methods to be researched for

effectiveness, and effective research must be supported with theory (Polit & Tatano Beck,

2008). While research has demonstrated that applying a model of experiential learning

resulted in enhanced clinical judgment, the majority of simulation studies fail to

report the theoretical frameworks used to inform simulation design (Chmil et al., 2015;

Mariani, Fey & Gloe, 2018). The PELS-CJI offers a potential framework to inform the

pre-brief component of simulations to enhance nursing student clinical judgment. The

PELS-CJI framework was developed with educational and nursing theoretical

perspectives. It supports knowledge discovery and integration with immediate

subsequent transfer and application of learned concepts during simulation. The results of

this study showed more advanced clinical judgment in the treatment group. This finding

supports the need to further test the PELS-CJI to inform clinical judgment development.

This study also highlights the use of scaffolding in simulation pre-brief. In

Chapter II, the concept of educational scaffolds was discussed. Critical elements of

scaffold design include individualized tailoring to learner needs with the goal being to

assist the learner in becoming independent in conceptual application (Coombs, 2018).

While the I-VRS shared in this study was identical for each participant, the interactive

features of PlayPosit provided participants with options to proceed at their own pace, to

revisit confusing concepts, and to respond to and reflect as much or as little as they

preferred, creating a tailored experience. There were three participants in this study who

were not able to complete the I-VRS activity in the allotted time. Extended time should

be offered in future I-VRS application to provide more individualized support.

In addition to individualizing scaffolds, well-designed scaffolds will lead learners

to a place of independence where the scaffold is no longer needed (Boblett, 2012). Participants in

this study were presented the I-VRS scaffold during pre-brief, and then had the

opportunity to immediately apply clinical reasoning processes in a subsequent simulation

without the scaffold present. In other words, the I-VRS in pre-brief supported learners in

moving beyond their current ability, through their ZPD, to a place of heightened potential

development in clinical reasoning and clinical judgment during the simulation. These

results support using technology, such as PlayPosit, to create individualized scaffolds to

promote independent clinical judgment in a subsequent simulation. Whether the I-VRS

would continue to support heightened clinical judgment in future simulations or clinical

experiences remains undetermined.

Time to Develop Clinical Judgment

A discussion of the concept of time in relation to the effects of the I-VRS on

clinical judgment development is also warranted. The results of this study showed

increased clinical judgment for participants who received the I-VRS during pre-brief just

prior to the simulation experience. In other words, a relatively small amount of time

(about 20 minutes to complete the I-VRS intervention) appeared to support clinical

judgment growth.

Clinical judgment, however, is not learned in a linear fashion in the nursing curriculum, and

the learner’s subject matter knowledge needs to be considered. Existing nursing

education literature highlights clinical judgment development progression over the course

of a semester or entire curriculum (Blum et al., 2010; Bussard, 2018; Chmil et al., 2015;

Leijser & Spek, 2021). In addition to time to develop pertinent subject matter

knowledge, enhanced clinical judgment is affiliated with increased simulation

experiences (Bussard, 2018; Fawaz & Hamdan-Mansour, 2016; Victor et al., 2017).

While literature describes time and experience as necessary factors to develop expert

clinical judgment, the results of this study support enhanced clinical judgment almost

immediately following the intervention. By repeating parallel simulations in pre-brief

and the actual simulation experience, more effective thinking during the simulation

(clinical reasoning) was likely a result of increased cognitive connections of theoretical

and sociocultural learned concepts by engaging with the expert nurse in the I-VRS

(National Academies of Sciences, Engineering, and Medicine, 2018). It can be posited

that other I-VRSs (that correlate with the learner’s current subject matter awareness)

would lead to increased clinical judgment in simulation.

The LCJR to Measure Clinical Judgment

Because clinical judgment involves the individuals’ prior experiences and other

factors, it is important to consider that the LCJR is not actually intended to measure clinical

judgment (Tanner, 2006; Lasater, personal communication, March 12th, 2023). Rather, it

is designed to “describe the trajectory of students’ clinical judgment” over time (Lasater,

personal communication, March 12th, 2023). However, participants in the intervention

group had the opportunity to compare their personal beliefs and prior experiences to the

expert nurse’s thinking in the I-VRS, which may have corrected any misunderstood

concepts previously learned. Since the LCJR was used for this study to assess the impact

of an intervention in just one simulation session as a measure of clinical judgment

between groups, it can be suggested that the LCJR did offer a measure of clinical

judgment at one point in time.

Using the LCJR to measure differences in clinical judgment components, participants in this study demonstrated increased clinical judgment related to noticing and responding, but not interpreting. The researcher encountered difficulty evaluating the interpreting component of clinical judgment because the LCJR defines interpreting in ways that are complicated to assess without access to the participants' thought processes. While the 'think aloud' method has been used to assess students' clinical reasoning processes in simulation, thinking out loud can decrease the authenticity of the simulation environment and be difficult for students who are already challenged by complex simulation encounters (Burbach et al., 2015). So as not to reduce fidelity or add complexity to the students' simulation experience, participants in this study were not encouraged to think aloud.

Additionally, the simulation debrief was not recorded for evaluation. To better examine

the effect of the I-VRS on the ‘interpreting’ and ‘reflecting’ components of clinical

judgment, further analysis involving the debrief session is warranted.

Implications

With simulation being a common teaching strategy in most nursing programs, nurse educators can develop and research additional I-VRSs (and other pre-brief scaffolds) informed by the PELS-CJI framework. Other frameworks to inform simulation pre-brief scaffold development are not apparent in the literature. With additional evidence demonstrating favorable outcomes for clinical judgment, the PELS-CJI framework has the potential to become a scientifically supported framework to guide simulation pre-brief content development. Wide use of a consistent, effective framework to inform pre-brief intervention design would add to the rigor and value of pre-brief research (Polit & Tatano Beck, 2008).

When simulation objectives involve clinical reasoning and clinical judgment development, the I-VRS could be tailored to the learner's level of development as they move through the nursing curriculum. By graduation, we might expect student nurses to display exemplary clinical judgment skills in simulation without an I-VRS in pre-brief. Because an I-VRS followed by simulation effectively doubles the experiential learning, we might also expect collectively more advanced clinical judgment skills from students who complete a curriculum that includes I-VRS pre-brief activities. Recalling that time and experience are contributing factors to clinical judgment development, using I-VRSs consistently during pre-brief throughout the nursing curriculum may support increased preparedness for practice at graduation. Consistent with prior research, an I-VRS before simulation could be routinely implemented as a standard of practice during pre-brief to increase clinical judgment, if further validated by additional research involving varying curricular levels, scenarios, and settings.

Limitations

Though carefully designed to meet research standards, this study does have limitations. One limitation is that the I-VRS was a single pre-brief scaffold made for one simulation event. Additionally, the study was conducted at one point in time and therefore does not demonstrate sustained clinical judgment growth. The degree to which clinical judgment is retained and applied to different clinical scenarios has not been determined. It is also not clear whether clinical judgment independence can be determined by cue counting. Additionally, the interpreting component of clinical judgment was challenging to measure in this study since the participants' thought processes were not apparent during the simulation and debriefing analysis was not conducted; thus, the results may not accurately reflect the interpreting component of clinical judgment. Likewise, the reflecting component of clinical judgment was not measured in this study. Further, the sample involved one level of learners at one college of nursing, which limits the generalizability of findings to other groups.

Recommendations

The limitations recognized in this study elicit several recommendations for future research. The simulation event in this study did involve two separate simulation scenarios; however, only two additional studies support the use of a VRS during pre-brief. I-VRS pre-brief scaffolds involving different clinical cases, samples, and populations must be developed and tested to add to the validity of these findings. Additionally, the effect of the I-VRS on clinical judgment was measured only at one point immediately following the intervention, with one cohort of learners at one institution. Future research using an I-VRS during pre-brief is necessary to determine whether improvement in clinical judgment is retained and transferable to the clinical setting and whether an I-VRS would be effective with additional populations of learners.

Other research should also include approaches to assess the interpreting and reflecting components of clinical judgment. To obtain data on the effect of the I-VRS on these components using the LCJR, a qualitative observation study of the post-simulation debrief should be considered. While the goal of a well-designed scaffold is to promote independence, the method used to measure independence in this study (cue counting) is a new concept. Future research should continue to explore the relevance of unintended conceptual cues to clinical judgment independence.

Conclusion

With limited reports to date of research studies that objectively assess the effects of pre-briefing strategies, this study provided empirical evidence on the use of an I-VRS, informed by the PELS-CJI, as a pre-brief scaffold to support clinical judgment development. This dissertation provided an overview of the problem of underdeveloped clinical judgment among new nurses and its impact on patient safety, followed by a literature review on clinical judgment development in undergraduate nursing curricula. The methods of the dissertation research study were then detailed, along with results showing an almost immediate difference in clinical judgment between the control and intervention groups when an I-VRS was implemented as a pre-brief scaffold.

This research addresses an issue in nursing education requiring immediate attention to ensure new nurse graduates are better prepared with clinical judgment skills prior to graduation, and it highlights the important charge of nurse educators to support nursing students in expedient clinical judgment development. This study showed that using an I-VRS in simulation pre-brief supported enhanced clinical judgment in simulation, and it adds to the limited existing body of evidence on pre-brief design to support clinical judgment development among undergraduate nursing students.

Appendix A

Lasater Clinical Judgment Rubric

Note. This rubric was produced by Lasater in 2007. From “Clinical Judgment

Development: Using a Simulation to Create an Assessment Rubric,” by K. Lasater, 2007,

Journal of Nursing Education, 46(11), pp. 500-501. Copyright 2005 by Kathie Lasater,

EdD, RN.

Appendix B

Lasater Clinical Judgment Scoring Sheet

References

Adams, W. K., Reid, S., LeMaster, R., McKagan, S. B., Perkins, K. K., Dubson, M., &

Wieman, C. E. (2008). A study of educational simulations part I-Engagement and

learning. Journal of Interactive Learning Research, 19(3), 397-419. Retrieved

September 2, 2023 from https://www.learntechlib.org/p/24230

Adamson K. A. (2011). Assessing the reliability of simulation evaluation instruments

used in nursing education: A test of concept study (Doctoral dissertation).

Available from ProQuest Dissertations and Theses database. (UMI No. 3460357)

Adamson, K. (2016). Rater Bias in Simulation Performance Assessment: Examining the

Effect of Participant Race/Ethnicity. Nursing Education Perspectives (National

League for Nursing), 37(2), 78–82.

Adamson, K. A., Gubrud, P., Sideras, S., & Lasater, K. (2011). Assessing the reliability,

validity, and use of the Lasater clinical judgment rubric: Three approaches.

Journal of Nursing Education, 51(2), 66-73. https://doi.org/10.3928/01484834-

20111130-03

Adamson, K. A., & Kardong-Edgren, S. (2012). A method and resources for assessing

the reliability of simulation evaluation instruments. Nursing Education

Perspectives, 33(5), 334-339. https://doi.org/10.5480/1536-5026-33.5.334

Agresti, A. (2007). Introduction to Categorical Data Analysis. 2nd edition. Hoboken, NJ:

John Wiley & Sons, Inc.

Alessi, S. (2000). Designing education support in system-dynamics-based interactive

learning environments. Simulation and Gaming, 31(2).

https://doi.org/10.1177/104687810003100205

American Association of Colleges of Nursing. (2022, September). Enhancing Diversity

in the Nursing Workforce. [Fact Sheet] https://www.aacnnursing.org/news-

information/fact-sheets/enhancing-diversity

American Association of Colleges of Nursing. (2022, September). Nursing Faculty

Shortage. [Fact Sheet] https://www.aacnnursing.org/news-information/fact-

sheets/nursing-faculty-shortage

American Association of Colleges of Nursing. (1998). The essentials of baccalaureate

education for professional nursing practice. Washington, DC. Retrieved from

https://files.eric.ed.gov/fulltext/ED433735.pdf

Anderson, M., Guido-Sanz, F., Talbert, S., Blackwell, C. W., Dial, M., McMahan, R. P.,

& Díaz, D. A. (2022). Augmented Reality (AR) as a Prebrief for Acute Care

Simulation. Clinical Simulation in Nursing, 69, 40–48.

https://doi.org/10.1016/j.ecns.2022.05.005

Andrea, J., & Kotowskit, P. (2017, July). Using standardized patients in an undergraduate

nursing health assessment class. Clinical Simulation in Nursing, 13(7), 309-313.

https://doi.org/10.1016/j.ecns.2017.05.003

Ashcraft, A. S., Opton, L., Bridges, R. A., Caballero, S., Veesart, A., & Weaver, C.

(2013). Simulation Evaluation Using a Modified Lasater Clinical Judgment

Rubric. Nursing Education Perspectives (National League for Nursing), 34(2),

122–126.

Ashley, J., & Stamp, K. (2014). Learning to think like a nurse: The development of

clinical judgment in nursing students. Journal of Nursing Education, 53(9), 519-

525. 10.3928/01484834-20140821-14

Ayed, A., Khalaf, I. A., Fashafsheh, I., Saleh, A., Bawadi, H., Abuidhail, J., Thultheen, I.,

& Joudallah, H. (2022, March 13). Effect of high-fidelity simulation on clinical

judgment among nursing students. Inquiry, 59, 1-6.

https://doi.org/10.1177/00469580221081997

Bambini, D., Washburn, J., & Perkins, R. (2009). Outcomes of clinical simulation for

novice nursing students: Communication, confidence, clinical judgment. Nursing

Education Perspectives, (30)2, 79-82.

https://pubmed.ncbi.nlm.nih.gov/19476069/

Bang, M. & Medin, D. (2010). Cultural processes in science education: Supporting the

navigation of multiple epistemologies. Science Education, 94(6).

https://doi.org/10.1002/sce.20392

Barnard, R., de Luca, R., & Li, J. (2014, April 8). First-year undergraduate students’

perception of lecturer and peer feedback: A New Zealand action research project.

Studies in Higher Education, 40(5), 933-944.

https://doi.org/10.1080/03075079.2014.881343

Bates, T. A., Moore, L. C., Green, D., & Cranford, J. S. (2019). Comparing outcomes of

active student and observer roles in nursing simulation. Nurse Educator, 44(4)

216-221. https://doi.org/10.1097/NNE.0000000000000603

Belland, B. (2014). Scaffolding: Definition, current debates and future directions. In J.M.

Spector et al. (eds.), Handbook of research on educational communications and

technology (pp. 505-518). Springer Science+Business Media. https://doi.org/10.1007/978-1-4614-3185-5_39

Bellg A.J., Borrelli B., Resnick B., Hecht J., Minicucci D.S., Ory M., Ogedegbe G.,

Orwig D., Ernst D., Czajkowski S.; Treatment Fidelity Workgroup of the NIH

Behavior Change Consortium. (2004). Enhancing treatment fidelity in health

behavior change studies: best practices and recommendations from the NIH

Behavior Change Consortium. Health Psychology, 23(5). https://doi.org/10.1037/0278-6133.23.5.443

Beman, S. B. (2017, December). Evaluation of student competence in simulation

following a prebriefing activity: A pilot study [Doctoral dissertation, University

of Wisconsin-Milwaukee]. University of Wisconsin-Milwaukee Digital Archive.

https://dc.uwm.edu/etd/1585/

Benner, P., Sutphen, M., Leonard, V. & Day, L. (2010). Educating nurses: A call for

radical transformation. San Francisco, CA: Jossey-Bass.

Benner, P. (1982). From novice to expert. American Journal of Nursing, 82(3), 402-407.

https://doi.org/10.1097/00000446-198282030-00004

Benner, P. (2000). From novice to expert. Upper Saddle River, NJ: Pearson.

Benner, P., Tanner, C. A., & Chesla, C. A. (2009). Expertise in nursing practice: Caring,

clinical judgment & ethics. Springer Publishing Company, New York.

Berndt, J., Dinndorf-Hogenson, G., Herheim, R., Hoover, C., Lang, N., Neuwirth, J., &

Tollefson, B. (2015). Collaborative classroom simulation: An innovative

pedagogy using simulation in nursing education. Nursing Education perspectives,

36(6), 401-402. https://doi.org/10.5480/14-1420

Betts, J., Muntean, W., Kim, D., Jorion, N., & Dickison, P. (2019). Building a method for

writing clinical judgment items for entry-level nursing exams. Journal of Applied

Testing Technology, 20(2), 21-36. https://www.ncsbn.org/public-

files/Building_a_Method_for_Writing_Clinical_Judgment_It.pdf

Biggs, A. T., Pistone, D., Riggenbach, M., Hamilton, J. A., & Blacker, K. J. (2021,

September). How unintentional cues can bias threat assessments during

shoot/don’t-shoot simulations. Applied Ergonomics, 95.

https://doi.org/10.1016/j.apergo.2021.1034511

Bliss, J., Askew, M., & Macrae, S. (1996). Effective teaching and learning: Scaffolding

revisited. Oxford Review of Education, 22(1), 37-61.

https://doi.org/10.1080/0305498960220103

Blum, C. A., Borglund, S., & Parcells, D. (2010). High-fidelity nursing simulation:

Impact on student self-confidence and clinical competence. International Journal

of Nursing Education Scholarship, 7(1). https://doi.org/10.2202/1548-923X.2035

Blum, C. A., & Parcells, D. A. (2012). Relationship between high-fidelity simulation and

patient safety in prelicensure nursing education: A comprehensive review. The

Journal of Nursing Education, 51(8), 429–435. https://doi.org/10.3928/01484834-

20120523-01

Boblett, N. (2012). Scaffolding: Defining the Metaphor.

https://doi.org/10.7916/D84Q86KN

Brandenburg, R. (2021). Enacting a pedagogy of reflection in initial teacher education

using critical incident identification and examination: A self-study of practice.

Reflective Practice, 22(1), 16-31.

https://doi.org/10.1080/14623943.2020.1821626

Brennan, B. A. (2022). The impact of self-efficacy based on prebriefing on nursing

student clinical competency and self-efficacy in simulation: An experimental

study. Nurse Education Today, 109. https://doi.org/10.1016/j.nedt.2021.105260

Brown, D., & Chronister, C. (2009). The effect of simulation learning on critical thinking

and self-confidence when incorporated into an electrocardiogram nursing course.

Clinical Simulation in Nursing, 5(1), 45-52.

https://doi.org/10.1016/j.ecns.2008.11.001

Burbach, B. E., & Thompson, S. A. (2014). Cue recognition by undergraduate nursing

students: An integrative review. Journal of Nursing Education, 53, 73-81.

https://doi.org/10.3928/01484834-20140806-07

Burbach, B. E., Barnason, S., & Thompson, S. A. (2015). Using ‘think aloud’ to capture

clinical reasoning during patient simulation. International Journal of Nursing

Education Scholarship, 12(1), 1-7. https://doi.org/10.1515/ijnes-2014-0044

Bussard, M. E. (2018). Evaluation of clinical judgment in prelicensure nursing students.

Nurse Educator, 43(2), 106-108. https://doi.org/10.1097/nne.0000000000000432

Candela, L., Dalley, K., & Benzel-Lindley, J. (2006). A case for learning-centered

curricula. Journal of Nursing Education, 45(2), 59-66.

https://doi.org/10.3928/01484834-20060201-04

Cazzell, M., & Anderson, M. (2016). The impact of critical thinking on clinical judgment

during simulation with senior nursing students. Nursing Education Perspectives,

37(2), 83-90. https://pubmed.ncbi.nlm.nih.gov/27209866/

Chamberlain, J. (2017). The impact of simulation prebriefing on perceptions of overall

effectiveness, learning, and self-confidence in nursing students. Nursing

Education Perspectives, 38(3), 119-125.

https://doi.org/10.1097/01.NEP.0000000000000135

Chi, M. T. H., & Wylie, R. (2014). The ICAP framework: Linking cognitive engagement

to active learning outcomes. Educational Psychologist, 49(4), 219–243. https://doi.org/10.1080/00461520.2014.965823

Chmil, J. V., Turk, M., Adamson, K., & Larew, C. (2015). Effects of an experiential

learning simulation design on clinical nursing judgment development. Nurse

Educator, 40(5), 228-232. https://doi.org/10.1097/NNE.0000000000000159

Cioffi, J. (2000). Nurses’ experiences of making decisions to call emergency assistance to

their patients. Journal of Advanced Nursing, 32(1), 108-114.

https://doi.org/10.1046/j.1365-2648.2000.01414.x

Clapper, T. C. (2015, March 23). Cooperative-based learning and the zone of proximal

development. Simulation & Gaming, 46(2), 148-158.

https://doi.org/10.1177/1046878115569044

Clapper, T. C. (2010, January). Beyond Knowles: What those conducting simulation need

to know about adult learning theory, Clinical Simulation in Nursing, 6(1), 7-14.

https://doi.org/10.1016/j.ecns.2009.07.003

Coombs, N. M. (2018). Educational scaffolding: Back to basics for nursing education in

the 21st century. Nurse Education Today, 68, 198-200.

https://doi.org/10.1016/j.nedt.2018.06.007

Coram, C. (2016). Expert role modeling effect on novice nursing students’ clinical

judgment. Clinical Simulation in Nursing, 12(9), 385-391.

https://doi.org/10.1016/j.ecns.2016.04.009

Costello, M. (2017). The benefits of active learning: Applying Bruner’s discovery

theory to the classroom: Teaching clinical decision-making to senior nursing

students. Teaching and Learning in Nursing, 12(3), 212-213.

Cress, U., & Kimmerle, J. (2018). Collective Knowledge Construction. In F. Fischer, C.

E. Hmelo-Silver, S. R. Goldman, & P. Reimann (Eds.), International Handbook

of the Learning Sciences (1st ed., pp. 137–146). Routledge.

https://doi.org/10.4324/9781315617572-14

Daley, B. J., Beman, S. B., Morgan, S., Kennedy, L., & Sheriff, M. (2017). Concept

maps: A tool to prepare for high fidelity simulation in nursing. Journal of the

Scholarship of Teaching and Learning, 17(4), 17-30.

https://doi.org/10.14434/josotl.v17i4.21668

Danish, J. & Gresalfi, M. (2018). Cognitive and sociocultural perspectives on learning:

Tensions and synergy in the learning sciences. In Fisher, F. et al. (Eds.),

International handbook of the learning sciences (pp. 34-43). New York, NY:

Routledge

Decker, S., Alinier, G., Crawford, S. B., Gordon, R. M., Jenkins, D., & Wilson, C.

(2021). Healthcare Simulation Standards of Best Practice: The Debriefing

Process. Clinical Simulation in Nursing, 58, 27–32.

https://doi.org/10.1016/j.ecns.2021.08.011

de Vries, P. (2005). Lessons from home: Scaffolding vocal improvisation and song

acquisition with a 2-year-old. Early Childhood Education Journal, 32(5), 307-

312. https://doi.org/10.1007/s10643-004-0962-2

Del Bueno, D. (2005). A CRISIS in critical thinking. Nursing Education Perspectives,

26(5), 278-282.

Devereux, L., & Wilson, K. (2008). Scaffolding literacies across the bachelor of

education program: An argument for a course-wide approach. Asia-Pacific

Journal of Teacher Education, 36(2), 121-134.

https://doi.org/10.1080/13598660801971633

Dickison, P., Haerling, K. A., & Lasater, K. (2019). Integrating the national council of

state boards of nursing clinical judgment model into nursing educational

frameworks. Journal of Nursing Education; Thorofare, 58(2), 72–78.

https://doi.org/10.3928/01484834-20190122-03

Dieckmann, P., Lippert, A., Glavin, R., & Rall, M. (2010). When things do not go as

expected: Scenario life savers. Simulation in Healthcare: The Journal of the

Society for Simulation in Healthcare, 5(4), 219-225. https://doi.org/10.1097/SIH.0b013e3181e77f74

Dileone, C., Chyun, D., Diaz, D. A., & Maruca, A. T. (2020). An examination of

simulation prebriefing in nursing education: An integrative review. Nursing

Education Perspectives, 41(6), 345–348.

https://doi.org/10.1097/01.NEP.0000000000000689

Docherty, A., Warkentin, P., Borgen, J., Garthe, K., Fischer, K. L., & Najjar, R. H.

(2018). Enhancing student engagement: Innovative strategies for intentional

learning. Journal of Professional Nursing, 34(6), 470–474.

https://doi.org/10.1016/j.profnurs.2018.05.001

Dreifuerst, K. T. (2012). Using debriefing for meaningful learning to foster development

of clinical reasoning in simulation. Journal of Nursing Education, 51(6), 326–

333. https://doi.org/10.3928/01484834-20120409-02

Dreifuerst, K. T. (2009). The essentials of debriefing in simulation learning: A concept

analysis. Nursing Education Perspectives, 30(2), 109–114.

Duffy, J. R., Frenn, M., & Patterson, B. (2011). Advancing nursing education science: An

analysis of the NLN’s grants program 2008–2010. Nursing Education

Perspectives, 32(1), 10–13. https://doi.org/10.5480/1536-5026-32.1.10

Fawaz, M. A., & Hamdan-Mansour, A. M. (2016). Impact of high-fidelity simulation on the

development of clinical judgment and motivation among Lebanese nursing

students. Nurse Education Today, 46, 36-42. https://doi.org/10.1016/j.nedt.2016.08.026

Ferguson, N. F., & Estis, J. M. (2018). Training students to evaluate preterm infant

feeding safety using a video-recorded patient simulation approach. American

Journal of Speech-Language Pathology, 27(2), 566–573.

https://doi.org/10.1044/2017_ajslp-16-0107

Fey, M. K., & Kardong-Edgren, S. (2017). State of research on simulation in nursing

education programs. Journal of Professional Nursing, 33(6), 397–398.

https://doi.org/10.1016/j.profnurs.2017.10.009

Fogg, N., Kubin, L., Wilson, C. E., & Trinka, M. (2020). Using virtual simulation to

develop clinical judgment in undergraduate nursing students. Clinical Simulation

in Nursing, 48, 55–58. https://doi.org/10.1016/j.ecns.2020.08.010

Fosnot, C. T., & Perry, R. (2005). Constructivism: A psychological theory of learning. In

C. T. Fosnot (Ed.), Constructivism: Theory, perspectives, and practice (pp. 8-38).

New York, NY: Teachers College, Columbia University.

Gubrud-Howe P., (2008). Development of clinical judgment in nursing students: A

learning framework to use in designing and implementing simulated learning

experiences (Unpublished dissertation). Portland State University, Portland, OR.

Goodare, P. (2017). Literature review: Why do we continue to lose our nurses? The

Australian Journal of Advanced Nursing: A Quarterly Publication of the Royal

Australian Nursing Federation, 34, 50–56.

https://www.ajan.com.au/archive/Vol34/Issue4/6Goodare.pdf

Haddad, L. M., Annamaraju, P., & Toney-Butler, T. J. (2022). Nursing shortage. In

StatPearls. StatPearls Publishing.

http://www.ncbi.nlm.nih.gov/books/NBK493175/

Hadenfeldt, C. J., Naylor, H. M., & Aufdenkamp, M. A. (2021). Escape the pharmacy:

An active learning strategy for the nursing pharmacology classroom. Nursing

Education Perspectives, 42(6), E161–162.

https://doi.org/10.1097/01.NEP.0000000000000742

Hamers, J., & Csapo, B. (1999). Teaching thinking. In J. H. M. Hamers, J. E. H. van

Luit & B. Csapo (Eds.), Teaching and learning thinking skills (pp. 11-36). Swets

and Zeitlinger.

Hanshaw, S. L., & Dickerson, S. S. (2020, July). High fidelity simulation evaluation

studies in nursing education: A review of the literature. Nurse Education in

Practice, 46, 102818. https://doi.org/10.1016/j.nepr.2020.102818

Hardman, J., & Ng’ambi, D. (2003). A questioning environment for scaffolding learners’

questioning engagement with academic text: A university case study. South

African Journal of Higher Education, 17(2), 139–146.

Hayden, J., Keegan, M., Kardong-Edgren, S., & Smiley, R. A. (2014). Reliability and

validity testing of the Creighton competency evaluation instrument for use in the

NCSBN national simulation study. Nursing Education Perspectives; New York,

35(4), 244–252. https://doi.org/10.5480/13-1130.1

Herron, E. K., Powers, K., Mullen, L., & Burkhart, B. (2019). Effect of case study versus

video simulation on nursing students’ satisfaction, self-confidence, and

knowledge: A quasi-experimental study. Nurse Education Today, 79, 129–134.

https://doi.org/10.1016/j.nedt.2019.05.015

Hines, C. B., & Wood, F. G. (2016). Clinical judgment scripts as a strategy to foster

clinical judgments. Journal of Nursing Education, 55(12), 691–695.

https://doi.org/10.3928/01484834-20161114-05

Hober, C., & Bonnel, W. (2014). Student perceptions of the observer role in high-fidelity

simulation. Clinical Simulation in Nursing, 10(10), 507–514.

https://doi.org/10.1016/j.ecns.2014.07.008

Hoffman, K.A., Aitken, L.M., & Duffield, C. (2009). A comparison of novice and expert

nurses’ cue collection during clinical decision-making: Verbal protocol analysis.

International Journal of Nursing Studies, 46(10), 1335–1344.

https://doi.org/10.1016/j.ijnurstu.2009.04.001

Holland, T. (2020). Educational strategies to foster empathy utilizing simulation

pedagogy. International Journal of Caring Sciences, 13(3), 1589–1595.

Horton, S. L. (2008). Lev goes to college: Reflections on implementing Vygotsky’s ideas

in higher education. International Journal of Learning, 15(4), 13–17.

Hovancsek, M. (2007). Using simulation in nurse education. In P. R. Jeffries (Ed.),

Simulation in nursing education: From conceptualization to evaluation (pp. 1-9).

National League for Nursing.

Howard, S. (2021). An evaluation of the defined observer role in simulation with

baccalaureate nursing students. Nursing Education Perspectives, 42(2), 110–112.

https://doi.org/10.1097/01.NEP.0000000000000603

Huston, C. L., Phillips, B., Jeffries, P., Todero, C., Rich, J., Knecht, P., Sommer, S., &

Lewis, M. P. (2018). The academic‐practice gap: Strategies for an enduring

problem. Nursing Forum, 53(1), 27–34. https://doi.org/10.1111/nuf.12216

Hydo, S. K., Marcyjanik, D. L., Zorn, C. R., & Hooper, N. M. (2007). Art as a

scaffolding teaching strategy in baccalaureate nursing education. International

Journal of Nursing Education Scholarship, 4(1), 1–13.

https://doi.org/10.2202/1548-923x.1330

INACSL Standards Committee. (2016). INACSL Standards of Best Practice:

SimulationSM Debriefing. Clinical Simulation in Nursing, 12, 21–25.

https://doi.org/10.1016/j.ecns.2016.09.008

Jarvill, M. (2021). Nursing student medication administration performance: A

longitudinal assessment. Nurse Educator, 46(1), 59–62.

https://doi.org/10.1097/NNE.0000000000000828

Jeffries, P. R. (2015). NLN Jeffries simulation theory: Brief narrative description.

Nursing Education Perspectives, 36(5), 292–293.

Jeffries, P. R. (2005). A framework for designing, implementing, and evaluating:

Simulations used as teaching strategies in nursing. Nursing Education

Perspectives, 26(2), 96–103.

Jensen, R. (2013). Clinical reasoning during simulation: Comparison of student and

faculty ratings. Nurse Education in Practice, 13(1), 23–28.

https://doi.org/10.1016/j.nepr.2012.07.001

Johnson, B. K. (2019). Simulation observers learn the same as participants: The evidence.

Clinical Simulation in Nursing, 33, 26–34.

https://doi.org/10.1016/j.ecns.2019.04.006

Johnson, E. A., Lasater, K., Hodson-Carlton, K., Siktberg, L., Sideras, S., & Dillard, N.

(2012). Geriatrics in simulation: Role modeling and clinical judgment effect.

Nursing Education Perspectives, 33(3), 176–180. https://doi.org/10.5480/1536-

5026-33.3.176

Kavanagh, J., & Sharpnack, P. (2021). Crisis in Competency: A Defining Moment in

Nursing Education. OJIN: The Online Journal of Issues in Nursing, 26(1).

https://doi.org/10.3912/OJIN.Vol26No01Man02

Kavanagh, J., & Szweda, C. (2017). A crisis in competency: The strategic and ethical

imperative to assessing new graduate nurses’ clinical reasoning. Nursing

Education Perspectives. 38(2), 57-62.

https://doi.org/10.1097/01.nep.0000000000000112

Kelly, M. A., Slatyer, S., Myers, H., Gower, S., Mason, J., & Lasater, K. (2022). Using

audio-visual simulation to elicit nursing students’ noticing and interpreting skills

to assess pain in culturally diverse patients. Clinical Simulation in Nursing, 71,

31–40. https://doi.org/10.1016/j.ecns.2022.06.003

Kidd, S. E. (2017). Factors contributing to clinical judgment development in nursing

students during simulation using the Creighton Competency Evaluation

Instrument [Doctoral Dissertation, East Carolina University]. The Scholarship.

http://hdl.handle.net/10342/6372

Kim, S.-J., Kim, S., Kang, K.-A., Oh, J., & Lee, M.-N. (2016). Development of a

simulation evaluation tool for assessing nursing students’ clinical judgment in

caring for children with dehydration. Nurse Education Today, 37, 45–52.

https://doi.org/10.1016/j.nedt.2015.11.011

Kim, Y.-J., Noh, G.-O., & Im, Y.-S. (2017). Effect of step-based prebriefing activities on

flow and clinical competency of nursing students in simulation-based education.

Clinical Simulation in Nursing, 13(11), 544–551.

https://doi.org/10.1016/j.ecns.2017.06.005

Kinyon, K., D’Alton, S., Poston, K., & Navarrete, S. (2021). Improving physical

assessment and clinical judgment skills without increasing content in a

prelicensure nursing health assessment course. Nursing Reports, 11(3), 600–607.

https://doi.org/10.3390/nursrep11030057

Klenke-Borgmann, L., Cantrell, M. A., & Mariani, B. (2020). Nurse educators’ guide to

clinical judgment: A review of conceptualization, measurement, and

development. Nursing Education Perspectives, 41(4), 215–221.

https://doi.org/10.1097/01.NEP.0000000000000669

Klenke-Borgmann, L., Cantrell, M. A., & Mariani, B. (2021). Clinical judgment in nursing students after observation of in-class simulations. Clinical Simulation in Nursing, 51, 19–27. https://doi.org/10.1016/j.ecns.2020.11.006

Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. Englewood Cliffs, NJ: Prentice-Hall.

Kool, A. (2022). Virtual simulation: Impact on clinical judgment. Oklahoma Nurse, 67(2), 18.

Korpi, H., Peltokallio, L., & Piirainen, A. (2018). Problem-based learning in professional

studies from the physiotherapy students’ perspective. The Interdisciplinary

Journal of Problem-Based Learning, 13(1). https://doi.org/10.7771/1541-

5015.1732

Kuiper, R., Heinrich, C., Matthias, A., Graham, M. J., & Bell-Kotwall, L. (2008).

Debriefing with the OPT Model of Clinical Reasoning during high fidelity patient

simulation. International Journal of Nursing Education Scholarship, 5(1).

https://doi.org/10.2202/1548-923X.1466

Lasater, K. (2007a). High-fidelity simulation and the development of clinical judgment: Students’ experiences. Journal of Nursing Education, 46(6), 269–276. https://doi.org/10.3928/01484834-20070601-06

Lasater, K. (2007b). Clinical judgment development: Using simulation to create an

assessment rubric. Journal of Nursing Education, 46(11), 496–503.

https://doi.org/10.3928/01484834-20071101-04

Lasater, K., Nielsen, A. E., Stock, M., & Ostrogorsky, T. L. (2015). Evaluating the clinical judgment of newly hired staff nurses. The Journal of Continuing Education in Nursing, 46(12), 563–571. https://doi.org/10.3928/00220124-20151112-09

Lavoie, P., Pepin, J., Cossette, S., & Clarke, S. P. (2019). Debriefing approaches for

high-fidelity simulations and outcomes related to clinical judgment in

baccalaureate nursing students. Collegian, 26(5), 514–521.

https://doi.org/10.1016/j.colegn.2019.01.001

Leijser, J., & Spek, B. (2021). Level of clinical reasoning in intermediate nursing

students explained by education year and days of internships per healthcare

branches: A cross-sectional study. Nurse Education Today, 96.

https://doi.org/10.1016/j.nedt.2020.104641

Leslie, J., Smith, C. R., Little, M. K., Schwytzer, D. J., Goodin, J., Rota, M. C., & Glazer,

G. (2020). Evaluation of a brief mindfulness strategy in the classroom: A

feasibility study. International Journal of Nursing Education, 12(4), 68–73.

https://doi.org/10.37506/ijone.v12i4.11219

Levett-Jones, T., Hoffman, K., Dempsey, J., Jeong, S., Noble, D., Norton, C. A., Roche, J., & Hickey, N. (2010). The ‘five rights’ of clinical reasoning: An educational model to enhance nursing students’ ability to identify and manage clinically ‘at risk’ patients. Nurse Education Today, 30(6), 515–520. https://doi.org/10.1016/j.nedt.2009.10.020

Lioce, L. (2020). Healthcare simulation dictionary (2nd ed.). Agency for Healthcare

Research and Quality. https://doi.org/10.23970/simulationv2

Lopez, V., Anderson, J., West, S., & Cleary, M. (2022). Does the COVID-19 pandemic

further impact nursing shortages? Issues in Mental Health Nursing, 43(3), 293–

295. https://doi.org/10.1080/01612840.2021.1977875

Lujan, J., & Vasquez, R. (2010). A case study of the Scaffolding Clinical Practicum

Model: Is it culturally competent for Hispanic nursing students? Journal of

Nursing Education, 49(7), 394–397. https://doi.org/10.3928/01484834-20100224-

05

MacLean, H., Janzen, K. J., & Angus, S. (2019). Lived experience in simulation: Student

perspectives of learning from two lenses. Clinical Simulation in Nursing, 31, 1–8.

https://doi.org/10.1016/j.ecns.2019.03.004

MacLeod, M., & van der Veen, J. T. (2020). Scaffolding interdisciplinary project-based

learning: A case study. European Journal of Engineering Education, 45(3), 363–

377. https://doi.org/10.1080/03043797.2019.1646210

Manz, J. A., Iverson, L. M., Hawkins, K., Tracy, M. E., Hercinger, M., & Todd, M.

(2022). Assessing student performance in a dedicated education unit: Validity of

the Creighton Competency Evaluation Instrument. Nursing Education Perspectives,

43(3), 184–186. https://doi.org/10.1097/01.nep.0000000000000838

Mariani, B., Cantrell, M. A., Meakim, C., Prieto, P., & Dreifuerst, K. T. (2013). Structured debriefing and students’ clinical judgment abilities in simulation. Clinical Simulation in Nursing, 9(5), e147-55. https://doi.org/10.1016/j.ecns.2011.11.009

Mariani, B., Fey, M., & Gloe, D. (2018). The simulation research rubric: A pilot study evaluating published simulation studies. Clinical Simulation in Nursing, 22, 1–4. https://doi.org/10.1016/j.ecns.2018.06.003

McCullagh, J. F. (2012). How can video supported reflection enhance teachers’

professional development? Cultural Studies of Science Education, 7(1), 137–152.

http://dx.doi.org/10.1007/s11422-012-9396-0

McDermott, D. S. (2016). The prebriefing concept: A Delphi study of CHSE experts. Clinical Simulation in Nursing, 12(6), 219–227. https://doi.org/10.1016/j.ecns.2016.02.001

McDermott, D. S. (2020). Prebriefing: A historical perspective and evolution of a model

and strategy (know: do: teach). Clinical Simulation in Nursing, 49, 40–49.

https://doi.org/10.1016/j.ecns.2020.05.005

McDermott, D. S., Ludlow, J., Horsley, E., & Meakim, C. (2021). Healthcare Simulation Standards of Best Practice™ Prebriefing: Preparation and briefing. Clinical Simulation in Nursing, 58, 9–13. https://doi.org/10.1016/j.ecns.2021.08.008

Meijerman, I., Nab, J., & Koster, A. S. (2016). Designing and implementing an inquiry-

based undergraduate curriculum in pharmaceutical sciences. Currents in

Pharmacy Teaching & Learning, 8(6), 905–919.

https://doi.org/10.1016/j.cptl.2016.08.001

Mistry, V. (2011). Critical care training: Using Twitter as a teaching tool. British Journal

of Nursing, 20(20), 1292–1296. https://doi.org/10.12968/bjon.2011.20.20.1292

Motlhaka, H. (2020). Blackboard collaborated-based instruction in an academic writing

class: Sociocultural perspectives of learning. Electronic Journal of E-Learning,

18(4), 337–346. https://doi.org/10.34190/EJEL.20.18.4.006

Najjar, R. H., Lyman, B., & Miehl, N. (2015). Nursing students’ experiences with high-

fidelity simulation. International Journal of Nursing Education Scholarship,

12(1), 27–35. https://doi.org/10.1515/ijnes-2015-0010

National Academies of Sciences, Engineering, and Medicine. (2018). How People Learn

II: Learners, Contexts, and Cultures. The National Academies Press.

https://doi.org/10.17226/24783

Neville, P. (2018). Introducing dental students to reflective practice: A dental educator’s

reflections. Reflective Practice, 19(2), 278–290.

https://doi.org/10.1080/14623943.2018.1437400

Nguyen, M.A. (2017). Liberal education and the connection with Vygotsky’s theory of

the zone of proximal development. Cultural-Historical Psychology, 13(1), 81–88.

https://doi.org/10.17759/chp.2017130108

Nichols, T., & Nichols, L. S. (2006). 2+2+2: An equation for Native American student success. In M. B. Lee (Ed.), Ethnicity matters: Rethinking how black, Hispanic, & Indian students prepare for & succeed in college (pp. 57–80). Peter Lang Publishing.

Norman, J. (2018). Differences in learning outcomes in simulation: The observer role.

Nurse Education in Practice, 28, 242–247.

https://doi.org/10.1016/j.nepr.2017.10.025

Notarnicola, I., Petrucci, C., De Jesus Barbosa, M. R., Giorgi, F., Stievano, A., & Lancia,

L. (2016). Clinical competence in nursing: A concept analysis. Professioni

Infermieristiche, 69(3), 174–181. https://doi.org/10.7429/pi.2016.693181

Nurakhir, A., Palupi, F. N., Langeveld, C., & Nurmalia, D. (2020). Students’ views of

classroom debates as a strategy to enhance critical thinking and oral

communication skills. Nurse Media Journal of Nursing, 10(2), 130–145.

https://doi.org/10.14710/nmjn.v10i2.29864

Page-Cutrara, K., & Turk, M. (2017). Impact of prebriefing on competency performance,

clinical judgment and experience in simulation: An experimental study. Nurse

Education Today, 48, 78–83. https://doi.org/10.1016/j.nedt.2016.09.012

Paige, J. B. (2016). More work needed! Analysis of fuzzy concepts in simulation-based

learning. Journal of Nursing Education, 55(2), 63–64.

https://doi.org/10.3928/01484834-20160114-01

Paige, J. B., & Morin, K. H. (2013). Simulation fidelity and cueing: A systematic review

of the literature. Clinical Simulation in Nursing, 9(11), 481-489.

https://doi.org/10.1016/j.ecns.2013.01.001

Palancia Esposito, C., & Sullivan, K. (2020). Maintaining clinical continuity through

virtual simulation during the COVID-19 pandemic. Journal of Nursing

Education, 59(9), 522–525. https://doi.org/10.3928/01484834-20200817-09

Pallant, J. (2020). SPSS Survival Manual: A step by step guide to data analysis using

IBM SPSS (7th ed.). Routledge. https://doi.org/10.4324/9781003117452

Pardue, K. T., Holt, K., Dunbar, D.-M., & Baugh, N. (2023). Exploring the development of nursing clinical judgment among students using virtual reality simulation. Nurse Educator, 48(2), 71–75. https://doi.org/10.1097/NNE.0000000000001318

Perbone Nunes, J. G., Lasater, K., de Souza Oliveira-Kumakura, A. R., Garbuio, D. C.,

Merizio Martins Braga, F. T., & de Carvalho, E. C. (2016). Adaptation of the

Lasater Clinical Judgment Rubric to the Brazilian culture. Journal of Nursing

UFPE / Revista de Enfermagem UFPE, 10, 4828–4836.

https://doi.org/10.5205/reuol.8200-71830-3-SM.1006sup201615

Poledna, M., Gomez-Morales, A., & Hahler, D. (2022). Nursing students’ cue recognition in educational simulation. Nurse Educator, 47(5). https://doi.org/10.1097/NNE.0000000000001198

Polit, D. F., & Beck, C. T. (2008). Nursing research: Generating and assessing evidence for nursing practice. Philadelphia: Wolters Kluwer Health.

Postma, T. C., & White, J. G. (2015). Developing clinical reasoning in the classroom—

Analysis of the 4 C/ ID-model. European Journal of Dental Education, 19(2), 74–

80. https://doi.org/10.1111/eje.12105

Powell-Laney, S., Keen, C., & Hall, K. (2012). The use of human patient simulators to

enhance clinical decision-making of nursing students. Education for Health,

25(1), 11-15. https://doi.org/10.4103/1357-6283.99201

Powers, K. (2020). Bringing simulation to the classroom using an unfolding video patient scenario: A quasi-experimental study to examine student satisfaction, self-confidence, and perceptions of simulation design. Nurse Education Today, 86. https://doi.org/10.1016/j.nedt.2019.104324

Drake, J. R. (2012). A critical analysis of active learning and an alternative pedagogical

framework for introductory information systems courses. Journal of Information

Technology Education: Innovations in Practice, 11, 39–52.

https://doi.org/10.28945/1546

Raman, S., Labrague, L. J., Arulappan, J., Natarajan, J., Amirtharaj, A., & Jacob, D.

(2019). Traditional clinical training combined with high‐fidelity simulation‐based

activities improves clinical competency and knowledge among nursing students

on a maternity nursing course. Nursing Forum, 54(3), 434–440.

https://doi.org/10.1111/nuf.12351

Reid, C. A., Ralph, J. L., El-Masri, M., & Ziefle, K. (2020). High-fidelity simulation and

clinical judgment of nursing students in a maternal–newborn course. Western

Journal of Nursing Research, 42(10), 829–837.

https://doi.org/10.1177/0193945920907395

Reime, M. H., Johnsgaard, T., Kvam, F. I., Aarflot, M., Engeberg, J. M., Breivik, M., &

Brattebø, G. (2017). Learning by viewing versus learning by doing: A

comparative study of observer and participant experiences during an

interprofessional simulation training. Journal of Interprofessional Care, 31(1),

51–58. https://doi.org/10.1080/13561820.2016.1233390

Roberts, P. (2018). Developing reflection through an eportfolio-based learning environment: Design principles for further implementation. Technology, Pedagogy and Education, 27(3), 313–326. https://doi.org/10.1080/1475939X.2018.1447989

Rode, J. L., Callihan, M. L., & Barnes, B. L. (2016). Assessing the value of large-group

simulation in the classroom. Clinical Simulation in Nursing, 12(7), 251–259.

https://doi.org/10.1016/j.ecns.2016.02.012

Rodziewicz, T. L., Houseman, B., & Hipskind, J. E. (2022). Medical error reduction and

prevention. In StatPearls. StatPearls Publishing.

http://www.ncbi.nlm.nih.gov/books/NBK499956/

Rogers, B. A., & Franklin, A. E. (2022). Describing learners’ clinical judgment trajectory after observing expert modeling videos: A mixed methods study. Clinical Simulation in Nursing, 73, 37–47. https://doi.org/10.1016/j.ecns.2022.08.001

Roh, Y. S., & Jang, K. I. (2017). Survey of factors influencing learner engagement with

simulation debriefing among nursing students. Nursing & Health Sciences, 19(4),

485–491. https://doi.org/10.1111/nhs.12371

Román-Cereto, M., García-Mayor, S., Kaknani-Uttumchandani, S., García-Gámez, M.,

León-Campos, A., Fernández-Ordóñez, E., Ruiz-García, M. L., Martí-García, C.,

López-Leiva, I., Lasater, K., & Morales-Asencio, J. M. (2018). Cultural

adaptation and validation of the Lasater Clinical Judgment Rubric in nursing

students in Spain. Nurse Education Today, 64, 71–78.

Rotsaert, T., Panadero, E., & Schellens, T. (2018). Anonymity as an instructional scaffold in peer assessment: Its effects on peer feedback quality and evolution in students’ perceptions about peer assessment skills. European Journal of Psychology of Education, 33(1), 75–99. https://doi.org/10.1007/s10212-017-0339-8

Roy, L. R. (2016). Baccalaureate nursing students’ perceptions of simulation and the development of clinical judgment. http://hdl.handle.net/10755/601897

Sakraida, T. J. (2020). Writing-in-the-discipline with instructional scaffolding in an RN-

to-BSN nursing research course. Journal of Nursing Education, 59(3), 179–180.

https://doi.org/10.3928/01484834-20200220-15

Sharoff, L. (2015). Simulation: Pre-briefing preparation, clinical judgment and reflection.

What is the connection? Journal of Contemporary Medicine, 5(2), 88-101.

https://doi.org/10.16899/ctd.49922

Sharpnack, P. A., Goliat, L., Baker, J. R., Rogers, K., & Shockey, P. (2013). Thinking

like a nurse: Using video simulation to rehearse for professional practice. Clinical

Simulation in Nursing, 9(12), 571–577. https://doi.org/10.1016/j.ecns.2013.05.004

Shin, H., Gi Park, C., & Shim, K. (2015). The Korean version of the Lasater Clinical

Judgment Rubric: A validation study. Nurse Education Today, 35(1), 68–72.

https://doi.org/10.1016/j.nedt.2014.06.009

Shin, H., & Kim, M. J. (2014). Evaluation of an integrated simulation courseware in a pediatric nursing practicum. Journal of Nursing Education, 53(10), 589–594. https://doi.org/10.3928/01484834-20140922-05

Shin, H., Sok, S., Hyun, K. S., & Kim, M. J. (2015). Competency and an active learning

program in undergraduate nursing education. Journal of Advanced Nursing, 71(3),

591–598. https://doi.org/10.1111/jan.12564

Shinnick, M. A., & Cabrera-Mino, C. (2021). Predictors of nursing clinical judgment in

simulation. Nursing Education Perspectives, 42(2), 107–109.

https://doi.org/10.1097/01.NEP.0000000000000604

Sideras, S. (2007). An examination of the construct validity of a clinical judgment

evaluation tool in the setting of high-fidelity simulation (Unpublished

dissertation). Oregon Health & Science University, Portland, OR.

Smagorinsky, P. (2018). Is instructional scaffolding actually Vygotskian, and why should

it matter to literacy teachers? Journal of Adolescent & Adult Literacy, 62(3), 253–

257. https://doi.org/10.1002/jaal.756

Solli, H., Haukedal, T. A., Husebø, S. E., & Reierson, I. Å. (2020). The art of balancing:

The facilitator’s role in briefing in simulation-based learning from the perspective

of nursing students – a qualitative study. BMC Nursing, 19, Article 99.

https://doi.org/10.1186/s12912-020-00493-z

Stanley, M. J. C., & Dougherty, J.P. (2010). Nursing education model. A paradigm shift

in nursing education: A new model. Nursing Education Perspectives, 31(6), 378–

380. https://doi.org/10.1043/1536-5026-31.6.378

Stephen, L., Kostovich, C., & O’Rourke, J. (2020). Psychological safety in simulation:

Prelicensure nursing students’ perceptions. Clinical Simulation in Nursing, 47,

25–31. https://doi.org/10.1016/j.ecns.2020.06.010

Strickland, H. P., Cheshire, M. H., & March, A. L. (2017). Clinical judgment during

simulation: A comparison of student and faculty scores. Nursing Education

Perspectives, 38(2), 85–86. https://doi.org/10.1097/01.NEP.0000000000000109

Takahashi, E. (1998). Language development in social interaction: A longitudinal study of a Japanese FLES program from a Vygotskyan approach. Foreign Language Annals, 31(3), 392–406.

Tanner, C. A. (2006). Thinking like a nurse: A research-based model of clinical judgment in nursing. Journal of Nursing Education, 45(6), 204–211. https://doi.org/10.3928/01484834-20060601-04

Tchounikine, P. (2019). Framing design for appropriation with zones of proximal

evolution: Email for PIM. International Journal of Human-Computer Studies,

123, 18–28. https://doi.org/10.1016/j.ijhcs.2018.11.004

Tedesco-Schneck, M. (2013). Active learning as a path to critical thinking: Are

competencies a roadblock? Nurse Education in Practice, 13(1), 58–60.

https://doi.org/10.1016/j.nepr.2012.07.007

Thiele, J. E., Baldwin, J. H., Hyde, R. S., Sloan. B., & Strandquist, G. A. (1986). An

investigation of decision theory: What are the effects of teaching cue recognition?

Journal of Nursing Education, 25(8), 319–324. https://doi.org/10.3928/0148-

4834-19861001-05

Todd, M., Manz, J. A., Hawkins, K. S., Parsons, M. E., & Hercinger, M. (2008). The

development of a quantitative evaluation tool for simulations in nursing

education. International Journal of Nursing Education Scholarship, 5(1).

https://doi.org/10.2202/1548-923X.1705

Tutticci, N., Ryan, M., Coyer, F., & Lewis, P. A. (2018). Collaborative facilitation of debrief after high-fidelity simulation and its implications for reflective thinking: Student experiences. Studies in Higher Education, 43(9), 1654–1667. https://doi.org/10.1080/03075079.2017.1281238

Victor-Chmil, J., & Larew, C. (2013). Psychometric properties of the Lasater Clinical Judgment Rubric. International Journal of Nursing Education Scholarship, 10(1), 45–52.

Victor, J. (2017). Improving clinical nursing judgment in prelicensure students. Journal

of Nursing Education, 56(12), 733–736. https://doi.org/10.3928/01484834-

20171120-05

Victor, J., Chavez, L. S., & Podlesney, C. (2021). Do predictor exams really predict

readiness for professional nursing practice? Clinical Simulation in Nursing, 50,

48–54. https://doi.org/10.1016/j.ecns.2020.10.005

Victor, J., Ruppert, W., & Ballasy, S. (2017). Examining the relationships between

clinical judgment, simulation performance, and clinical performance. Nurse

Educator, 42(5), 236–239. https://doi.org/10.1097/NNE.0000000000000359

Vreugdenhil, J., & Spek, B. (2018). Development and validation of Dutch version of

Lasater Clinical Judgment Rubric in hospital practice: An instrument design

study. Nurse Education Today, 62, 43–51.

https://doi.org/10.1016/j.nedt.2017.12.013

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.

Wheeler, J., Dudas, K., & Brooks, G. (2021). Anxiety and a mindfulness exercise in

healthcare simulation prebriefing. Clinical Simulation in Nursing, 59, 61–66.

https://doi.org/10.1016/j.ecns.2021.05.008

Williams, B., French, J., & Brown, T. (2009). Can interprofessional education DVD

simulations provide an alternative method for clinical placements in nursing?

Nurse Education Today, 29(6), 666–670.

https://doi.org/10.1016/j.nedt.2009.02.008

Yuan, H., Williams, B., & Man, C. (2014). Nursing students’ clinical judgment in high

fidelity simulation based learning: A quasi-experimental study. Journal of

Nursing Education and Practice, 4(5), 7-15.

Curriculum Vitae

Emily S. McIntire

EDUCATION

2018 – 2024 PhD, Nursing Science-Health Systems, Indiana University-


Purdue University Indianapolis

2010 – 2013 MSN, Nursing Education, Ferris State University, Big


Rapids, MI

2004 – 2005 BS, Nursing Science, Ferris State University, Big Rapids,
MI

2002 – 2004 AAS, Nursing, Ferris State University, Big Rapids, MI

1999 – 2002 Pre-Nursing, Central Michigan University, Mt. Pleasant,


MI

LICENSE & CERTIFICATION

2021 – 2023 Basic Life Support for Healthcare Provider Instructor,


American Heart Association

2020 – Present Certified Healthcare Simulation Educator, Society for


Simulation in Healthcare

2004 – Present Registered Nurse, Michigan

PROFESSIONAL EMPLOYMENT

Academic Appointments

03/2013 – Present Simulation Lab Coordinator, Instructor, Michigan State


College of Nursing, East Lansing, MI

01/2010 – 07/2013 Adjunct Nursing Faculty, Lansing Community College,


Lansing, MI

Clinical Appointments

08/2011 – 05/2013 Staff Nurse, Pain Clinic, Sparrow Health Systems, Lansing,
MI
08/2008 – 06/2010 Charge Nurse, Dialysis, Fresenius Medical Care, Charlotte,
MI

08/2007 – 07/2008 Charge Nurse, Interventional Physiatry, Centis Health –


The Spine Center, East Lansing, MI

06/2004 – 08/2007 Staff Nurse and Charge Nurse, Critical Care Surgical
Stepdown, Sparrow Health Systems, Lansing, MI

HONORS & AWARDS

2020, 2023 Recipient, #iteachmsu Educator Award, Michigan State University

2018 Recipient, Billie Diane Gamble Undergraduate Faculty Teaching


Excellence/Enrichment Award, Michigan State University

2017 Recipient, Michigan State University, Lilly Teaching Fellows

2012 Recipient, Ferris State University, Howard, Irene, and Michael


Price Scholarship

2003 Recipient, Ferris State University, Student Award of Nursing


Excellence

GRANT FUNDING

Research
2023 – 2024 PhD student. Midwest Nursing Research Society: A
Simulation Pre-Brief Scaffold to Support Clinical Judgment
and Independence in Clinical Judgment Decision Making.
Funded in 2023 by Indiana University School of Nursing.
($1,500)

2023 – ongoing Co-I. AACN Faculty Scholars Grant Program – An Equity


Workshop for BSN Students. Submitted 07/27/23, Under
Review

2022 – 2023 Consultant. Sustainable Health at MSU: Clinical


Simulation: An Innovative Path to Sustainable Health.
Funded 2022 by Michigan State University (C. Jensen, PI;
$5,000,000)

2017 – 2018 Fellow. Lilly Fellowship SOTL Program. Funded by


Michigan State University ($12,260)
PUBLICATIONS

Liu, C.C., McIntire, E., Sender, J., & Ling, J. (under review, July 2023). Teaching
Social Determinants of Health in BSN Programs: An Integrative Review of
Strategies and Effectiveness. Nurse Educator. (IF=2.6)

Liu, C.C., Ling, J., Liu, C., Ammigan, R., Schrader, K. & McIntire, E. (2022).
Vaccination Rates Among International Students: Insights from a University
Health Vaccination Initiative. The Journal of American College Health. doi:
10.1080/07448481.2022.2155470 (IF=2.4)

PRESENTATIONS

Refereed Podium – National

McIntire, E. & Schaffrath, M. (2019, February). RN Case in a Box: A Novel


Strategy for Active Learning Environments in Nursing Education [Podium
Presentation]. Wolters Kluwer Health Learning, Research & Practice -
Lippincott Nursing Education Innovation Summit, Fort Lauderdale, FL.

Chan, R. & McIntire, E. (2017, October). Embodied Nursing Pedagogy: Walking a


Mile in the Patient's Shoes [Podium Presentation]. Association for
Contemplative Mind in Higher Education (ACMHE), Scotts Valley, CA.

Refereed Poster – International

McIntire, E. & Harris, C. (2020, June). A Patient Dies: A Simulation Enhanced


Active Learning Experience to Improve Student Preparedness for Using
SBAR [Poster presentation]. INACSL 2020 Conference, virtual.

Refereed Poster – State

McIntire, E., Palmer, K. & Skuras, A. (2022, October). Effects of a Simulation on


Nursing Students’ Disaster Preparedness [Poster presentation]. Michigan
Nursing Summit, Traverse City, MI.

Refereed Poster – Local

McIntire, E., Sender, J. & Poindexter, K. (2018, May). Faculty and Student
Conceptions on Student-Centered Learning: A Quality Initiative Project.
Michigan State University Spring Conference for Teaching and Learning,
East Lansing, MI.
INVITED PRESENTATIONS

Invited Podium – National

McIntire, E. & Schaffrath, M. (2021, September). A Novel Strategy for Active


Learning Environments in Nursing Education [Podium Presentation].
National League of Nursing 2021 Summit, Washington D.C.

Invited Podium – Local

Chan, R. & McIntire, E. (2017, December). The Role of Contemplative Science in


Nursing Curriculum. MSU CON Teaching Commons, East Lansing, MI.

McIntire, E. & West, P. (2013, July). Online Learning: Strategies and Skills for
Success. Michigan State University College of Nursing, East Lansing, MI.

McIntire, E. (2013, January). Mentoring, Teaching Strategies, and Educational


Techniques for Clinical Faculty. Lansing Community College, Lansing, MI.

LEADERSHIP & MEMBERSHIP IN PROFESSIONAL ORGANIZATIONS

2019 – present Member, International Nursing Association for Clinical


Simulation and Learning

2014 – 2018 Member, Sigma Theta Tau International

2014 – 2015 Communication Coordinator (content solicitor and editor


for the Nursing Section Newsletter), Society for Simulation
in Healthcare

2010 – 2014 Member, Michigan Education Association

2009 – present Member, National League for Nursing

2009 – 2014 Member, Michigan Association for Higher Education

2003 – 2005 Member, Student Nurses Association, Ferris State


University

COLLEGE OF NURSING SERVICE - MSU

2023 Member, Search Committee for Simulation Nurse Educator

2023 Member, Search Committee for Detroit Site Coordinator

2023 Member, Search Committee for Nurse Planner


2023 Member, Search Committee for Associate Dean for College
Performance and Professional Development

2022 Member, Search Committee for Detroit Simulation


Educator

2021 – 2023 Member, SANE Program Initiation and Implementation


Workgroup

2021 – 2023 Member, Undergraduate Programs Committee

2021 – 2023 Member, Simulation Operations Committee

2021 – 2022 Member, Simulation Review Team

2020 – 2022 Member, CON COVID Re-Entry Task Force

2018 – 2021 Member, Undergraduate Programs Committee

2015 – 2018 Communications Committee, Alpha Psi Sigma Theta Tau


International

2013 – 2020 Facilitator/Host Future Nurse’s Club

2013 – 2016 Member, Pharmacology Task Force

UNIVERSITY SERVICE - MSU

2023 Facilitator/Host MSU Grandparents University

2023 Facilitator/Host MSU GATE Program

2022 – 2023 Member, Standardized Patient Committee

2018 – 2019 Member, MSU Library Committee

2016 – 2019 Facilitator/Host MSU Science Festival

2016 – 2019 Facilitator/Host MSU Grandparents University

2016 – 2019 Facilitator/Host MSU GATE Program


OTHER SERVICE

2023 Content expert reviewer, SIDM Project - simulation


activity aligned with SIDM diagnostic reasoning
competencies as described in the Consensus Curriculum:
Individual Competencies (Margaret Perlia Bavis, DNP,
APRN, FNP-BC)

2021 Facilitator, INACSL Simulation Education Program

2015 Reviewer, Staff Educator’s Guide to Professional


Development. Doody’s Review Service (on-line).
Available: http://www.doody.com

2015 Reviewer, Sandra Smith’s Review for NCLEX-RN, 13th


Edition. Doody’s Review Service (on-line). Available:
http://www.doody.com

TEACHING

Michigan State University

2013-2023 NUR 205, 323, 337, 371, 434, 438, 471: Several courses;
Instructor; Taught alone and co-taught with TAs;
Clinical/Lab

Spring 2018 NUR 324: Health Promotion, (8 Students); Instructor;


Taught alone; Clinical

Summer 2016 NUR 460: Leadership, (8 students); Instructor; Taught


alone; Clinical

OTHER

Professional Development/Continuing Education/Trainings/Certificates

2017 TeamSTEPPS Master Trainer Certification

Podcasts, Interviews, News Stories, Other Endeavors

02/2020 Everson, B. (Producer). (2020, February 25). Teaching,


Learning, and Everything Else [Audio Podcast].
https://cat.xula.edu/food/conversation92/?mc_cid=d7391b6
615&mc_eid=59451c0a0c
