
Walden University

ScholarWorks
Walden Dissertations and Doctoral Studies Collection

2019

Teachers' Formative Assessment Use to Check for Understanding and to Adjust Instruction
Bobbi Jo Kenyon
Walden University

Follow this and additional works at: https://scholarworks.waldenu.edu/dissertations


Part of the Elementary and Middle and Secondary Education Administration Commons, and the
Secondary Education and Teaching Commons

This Dissertation is brought to you for free and open access by the Walden Dissertations and Doctoral Studies Collection at ScholarWorks. It has been
accepted for inclusion in Walden Dissertations and Doctoral Studies by an authorized administrator of ScholarWorks. For more information, please
contact [email protected].
Walden University

College of Education

This is to certify that the doctoral study by

Bobbi Jo Kenyon

has been found to be complete and satisfactory in all respects,


and that any and all revisions required by
the review committee have been made.

Review Committee
Dr. Susan Koyzis, Committee Chairperson, Education Faculty
Dr. Billie Andersson, Committee Member, Education Faculty
Dr. Mary Howe, University Reviewer, Education Faculty

Chief Academic Officer


Eric Riedel, Ph.D.

Walden University
2019
Abstract

Teachers’ Formative Assessment Use to Check for Understanding

and to Adjust Instruction

by

Bobbi Jo Kenyon

MA, Grand Valley State University, 2002

BS, Central Michigan University, 1995

Doctoral Study Submitted in Partial Fulfillment

of the Requirements for the Degree of

Doctor of Education

Walden University

February 2019
Abstract

School leaders at an urban high school in the U.S. Midwest encouraged teachers to use

formative assessment to help students meet learning goals; however, several years later,

they found inconsistent implementation. Without a clear understanding of teachers’

formative assessment practices, leaders could not establish needed supports for its

consistent use in the classrooms. The purpose of this bounded qualitative case study was

to examine teachers’ formative assessment use to check for student understanding and to

adjust instruction. Black and Wiliam’s formative assessment theory formed the

foundation of this study. Research questions focused on teachers’ perceptions of

formative assessment and usage of formative assessment for instruction. Ten state

certified high school teachers, who had at least a bachelor’s degree, passed basic skills

and subject area examinations, and taught within their majors or minors, were

purposefully selected to provide data. Data were gathered from observations, interviews,

and teacher logs and were analyzed inductively using open and axial coding strategies.

Results showed teachers collected formative assessment feedback from a limited number

of students and used it to determine student understanding and modify instruction. Furthermore,

they lacked the knowledge, skills, and strategies to implement formative assessment to

help all students meet learning goals. Based on the findings, 3 professional development

(PD) sessions were created to help school leaders provide support for teachers’ consistent

formative assessment implementation. These endeavors may contribute to positive social

change when administrators provide teachers with PD that increases their knowledge

and skills in using formative assessment and, ultimately, helps students meet learning goals.
Teachers’ Formative Assessment Use to Check for Understanding

and to Adjust Instruction

by

Bobbi Jo Kenyon

MA, Grand Valley State University, 2002

BS, Central Michigan University, 1995

Doctoral Study Submitted in Partial Fulfillment

of the Requirements for the Degree of

Doctor of Education

Walden University

February 2019
Dedication

I would like to dedicate this project study to my father, in whose footsteps I

followed. Witnessing his passion for science and teaching inspired me from an early age.

Collecting rocks and insects, gazing through a telescope, caring for animals, naming tree

species, dissecting specimens after school, talking about natural wonders, and watching

Carl Sagan’s Cosmos and Marty Stouffer’s Wild America—they all fueled my love for

science. Setting up labs, visiting the classroom, grading lab reports, watching late-night

lesson planning, listening to countless teaching stories, and hearing the appreciation of

former students—they all fueled my love for teaching. I have been so blessed that my

father’s passion was also my passion and that we could share our love for science and

teaching with one another.

I would also like to dedicate this project study to my mother. From teaching me

at my little green desk, such as all the bones of the body at age five, to the countless

hours quizzing me for tests as we sat on her bed—she taught me to work hard and to love

learning. More importantly, she instilled in me the drive to do my best. She hung two

signs prominently in our house: “Anything worth doing at all, is worth doing well” from

Hunter Thompson and “If you think you can, or you think you can’t, you’re absolutely

right” by Henry Ford. Those simple quotes had a profound effect on my life. She taught

me that anything I put my mind to, with my best effort, I could achieve. That sentiment

has brought me many accomplishments and much fulfillment in my life. Now it has resulted in

earning my doctoral degree. I have also been blessed that she taught me that character is the

true measure of success in life.


Acknowledgments

I would like to first acknowledge my significant other, Lance, who has given me

unwavering love and support throughout my years of teaching, my endeavors as

Michigan Teacher of the Year, and my countless hours of doctoral work. You always

remind me to believe in myself when I face challenges and take on new roles. Having

you by my side during these journeys has made all the difference. I also want to

acknowledge my sister Sara Jo's constant love and support. Despite the distance, you

always stay involved in my life, encouraging me and celebrating with me along the way.

I would like to thank several other people who have helped me reach this point in

my academic career. My committee chair, Dr. Susan Koyzis, has helped me grow as

a scholar through thoughtful feedback and meaningful guidance. I am thoroughly

grateful for all the time and support you have dedicated as I completed this project study.

Thank you to Dr. Billie Andersson and Dr. Howe for your support and feedback and to

my Walden University professors for their focus on academic excellence and dedication

to positive social change. Thank you also to my former principal, R. Lewis, who

recognized my leadership potential and encouraged me to move out of my comfort zone.

You helped me realize that I could make a difference beyond my classroom.

Most importantly, I want to thank God for providing me with wonderful

opportunities in life, giving me the courage to follow those opportunities, and walking

with me down all the paths I have chosen to follow.


Table of Contents

List of Tables ..................................................................................................................... vi

Section 1: The Problem ........................................................................................................1

Introduction ....................................................................................................................1

Background ....................................................................................................................2

The Local Problem .........................................................................................................3

Rationale ........................................................................................................................4

Local Evidence of the Problem ............................................................................... 4

Evidence of the Problem from Literature ............................................................... 7

Definition of Terms......................................................................................................11

Significance of the Study .............................................................................................14

Research Questions .....................................................................................................17

Review of the Literature ..............................................................................................18

Framework ............................................................................................................ 19

Brief History of Formative Assessment ................................................................ 23

Understanding Formative Assessment .................................................................. 25

Formative Assessment Versus Summative Assessment ....................................... 31

Formative Assessment and Student Achievement ................................................ 35

Checking for Understanding ................................................................................. 39

Adjusting Instruction ............................................................................................ 53

Implications..................................................................................................................57

Summary ......................................................................................................................59
Section 2: The Methodology..............................................................................................61

Research Design and Approach ...................................................................................61

Participants ...................................................................................................................65

Participant Selection and Access .......................................................................... 66

Ethical Protection of Participants.......................................................................... 70

Data Collection ............................................................................................................71

Justification of Data Choices ................................................................................ 71

Direct Observations .............................................................................................. 72

Interviews .............................................................................................................. 75

Logs ................................................................................................................... 78

Researcher’s Role ................................................................................................. 80

Data Analysis ...............................................................................................................81

Research Questions ............................................................................................... 82

Coding ................................................................................................................... 84

Accuracy and Credibility of Findings ................................................................... 85

Discrepant Data ..................................................................................................... 86

Data Analysis Results ..................................................................................................89

Theme 1: Implementation ..................................................................................... 90

Theme 2: The Feedback Cycle ........................................................................... 102

Theme 3: Knowledge and Beliefs ....................................................................... 111

Theme 4: Barriers and Supports ......................................................................... 116

Interpretation of the Findings.....................................................................................124


Summary ....................................................................................................................135

Section 3: The Project ......................................................................................................138

Introduction ................................................................................................................138

Project Goals ..............................................................................................................139

Project Goal 1 ..................................................................................................... 139

Project Goal 2 ..................................................................................................... 140

Project Goal 3 ..................................................................................................... 140

Rationale ....................................................................................................................141

Review of the Literature ............................................................................................144

Professional Development .................................................................................. 145

Professional Learning Communities ................................................................... 152

Collecting Student Feedback During Formative Assessment ............................. 157

Summary ....................................................................................................................174

Project Description.....................................................................................................176

Existing Supports and Resources Needed........................................................... 176

Potential Barriers and Possible Solutions ........................................................... 179

Proposal for Implementation and Timetable....................................................... 180

Roles and Responsibilities .................................................................................. 197

Project Evaluation Plan ..............................................................................................198

Implications Including Social Change .......................................................................205

Local Stakeholders .............................................................................................. 205

Larger Context .................................................................................................... 206


Conclusion .................................................................................................................207

Section 4: Reflections and Conclusions...........................................................................209

Introduction ................................................................................................................209

Project Strengths and Limitations ..............................................................................209

Project Strengths ................................................................................................. 209

Project Limitations .............................................................................................. 211

Recommendations for Alternative Approaches .........................................................213

Scholarship .................................................................................................................214

Project Development and Evaluation.........................................................................215

Leadership and Change ..............................................................................................217

Reflection of Self as Scholar......................................................................................219

Reflection of Self as Practitioner ...............................................................................221

Reflection of Self as Project Developer .....................................................................222

Reflection on Importance of the Work ......................................................................224

Implications, Applications, and Directions for Future Research ...............................226

Conclusion .................................................................................................................229

References ........................................................................................................................231

Appendix A: Project Study ..............................................................................................255

Appendix B: Participant Demographics ..........................................................................329

Appendix C: Observation Protocol ..................................................................................330

Appendix D: Interview Protocol ......................................................................................332

Appendix E: Teacher Classroom Formative Assessment Log ........................................334


Appendix F: List of Possible Formative Assessment Strategies .....................................335

Appendix G: Whole Group OTR Strategies by Category ...............................................336

List of Tables

Table 1. Inductively Developed Thematic Categories.......................................................92

Section 1: The Problem

Introduction

Formative assessment has been a widely discussed and well-researched practice

since its introduction to the educational field through the research of Black and Wiliam

(1998a). The main benefit of formative assessment is that its consistent use has been

shown to increase student achievement by providing teachers with evidence of students’

current understanding so that teachers can help students reach intended learning goals

(Duckor, 2014; Tomlinson, 2014). In fact, Wiliam (2013) stated that formative

assessment is "one of the most powerful ways of improving student achievement” (p. 15).

Formative assessment and student achievement are related because the former can

uncover what students do not understand during the learning process (Fisher & Frey,

2014a). Teachers can use information gathered from formative assessment tasks to

address student misunderstandings by modifying their instruction (Miranda & Hermann,

2015). Researchers have found that formative assessment is used consistently by

effective teachers and urban school districts with high student achievement (Johnson,

Uline, & Perez, 2013), and it has been shown to be particularly beneficial for low

achievers (Black & Wiliam, 1998a; Hanover Research, 2014). However, studies have

shown that most teachers do not use this research-based practice regularly to check for

student understanding of concepts (Wylie & Lyon, 2015) and, equally crucial, do not use

the results to modify their instruction (Trumbull & Lash, 2013). If teachers do not

consistently check and address student understanding, then students may not meet

learning goals and student achievement-related issues may prevail.


Background

I conducted this study at Hammond High School (pseudonym), one of three high

schools located in a large urban district in the northern Midwest United States. The

school consisted of a population of around 650 students who attended Grades 9 through

12. Of the students Hammond served, 73% were African-American, 14% were Hispanic,

and 8% were White; overall, 80% of the students were classified as economically

disadvantaged (Great Schools Dashboard, 2016). Lack of student understanding of

curricular concepts, as measured by not meeting learning goals derived from state and

district standards, had been a contributing factor to many student achievement-related

issues at this school. During the 2015-2016 school year, 40% of students failed at least

one class; of these students, 35% failed two or more classes, and 29% failed three or

more classes. According to the school data specialist, this resulted in 138 students in the

ninth through eleventh grades not earning enough credits to progress to the next grade

level. Additionally, student achievement on state standardized tests was consistently

among the lowest in the state. The state standardized test, taken in 11th grade in all core

subjects, showed a student proficiency rate of 9.8%, compared with the state average of

32.6%. Consequently, graduation rates suffered, with only 56.2% of students graduating

in 4 years.
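
As a rough check on how these nested percentages combine, the following is a minimal, hypothetical back-of-the-envelope computation in Python based on the figures reported above. It assumes the reported enrollment of roughly 650 students; the resulting counts are estimates for illustration only, not figures from the school data specialist.

# Hypothetical estimate from the reported percentages; enrollment is approximate.
enrollment = 650
failed_one_or_more = round(0.40 * enrollment)            # 40% of all students
failed_two_or_more = round(0.35 * failed_one_or_more)    # 35% of those who failed
failed_three_or_more = round(0.29 * failed_one_or_more)  # 29% of those who failed
print(failed_one_or_more, failed_two_or_more, failed_three_or_more)
# Prints roughly: 260 91 75

Note that the 35% and 29% figures apply to the subgroup of students who failed at least one class, not to the whole student body.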

The student achievement data for previous school years were similar to the 2015-

2016 data. Due to consistently low student achievement levels and students not meeting

state learning goals, Hammond was placed in the bottom 5% of the state in the top-to-

bottom ranking. This classification, along with district school improvement


requirements, spurred leaders at Hammond to decide upon several research-based

practices that they wanted to encourage teachers to use in their classrooms to help

students meet learning goals and to positively influence student achievement. According

to the school data specialist, school leaders chose formative assessment as one of the

instructional practices to implement because research in the larger educational setting has

shown that teachers’ use of formative assessment in the classroom can positively

influence student achievement (Andersson & Palm, 2017; Baird, Hopfenbeck, Newton,

Stobart, & Steen-Utheim, 2014; Cornelius, 2014; Filsecker & Kerres, 2012; Hattie, 2012;

Hudesman et al., 2013; Madison-Harris & Muoneke, 2012; Yin et al., 2013).

The Local Problem

A school administrator reported that to help address student achievement-related

issues at Hammond, school leaders had encouraged teachers for several years to

use formative assessment to confirm that students understood the posted learning targets

and to modify instruction as needed to address any misunderstandings. Despite the

encouragement to use formative assessment, however, an instructional leader at the

school reported that there was a lack of consistent use of this instructional practice by

teachers at Hammond to check for student understanding and to adjust instruction so

students could meet learning goals. Thus, a gap in practice existed between what

research-based literature has shown to be an effective method to increase student

achievement and the current teacher practices regarding formative assessment use at the

school. To rectify the gap between what literature has shown to be an effective way to

increase student achievement and the formative assessment practices of teachers at


Hammond, it was important for school leaders to have clear information about how

teachers used formative assessment in their classrooms. Only then could school leaders

establish the supports needed to promote the consistent use of formative assessment in

the locality.

Rationale

Local Evidence of the Problem

Leaders at Hammond High School were concerned about the lack of consistent

use of formative assessment to check for student understanding and to adjust instruction.

Despite encouraging teachers to use formative assessment, local achievement data

remained low. The local data showing low student achievement, along with literature

revealing a connection between increased academic performance and appropriate use of

formative assessment, suggested an inconsistent use of this instructional method at

Hammond. An instructional leader at Hammond commented that there was concern

among administrators that teachers’ inconsistent use of formative assessment to check for

understanding may have played a role in students not meeting state and district learning

goals, which, consequently, may contribute to the school's continued low student

achievement levels. Another instructional leader mentioned that from periodic classroom

observations conducted throughout the year, there was “a noticeable variation between

formative assessment use among teachers at our school.” School leaders also questioned

teacher adeptness at using formative assessment feedback from students to adjust their

instruction to help students meet learning goals. The former school data specialist stated

that leaders did not understand how teachers used information from formative assessment
to adjust their instruction so that they could address student misunderstandings.

Classroom and school achievement data suggested that reteaching based on formative

assessment results was inconsistent as well. Without clear information about the use of

these two components of the formative assessment process—checking for understanding

and adjusting instruction—school leaders could not make informed decisions regarding

how to support teachers’ consistent use of this research-based practice.

To support the process of helping students understand curricular concepts and

successfully meet learning goals, school leaders must have information about teacher

formative assessment practices in their buildings (Sanzo, Myran, & Caggiano, 2015).

Stanley and Alig (2014) determined that if leaders were informed and supportive when

overseeing implementation of formative assessment in their schools, student achievement

increased. Examining how teachers use formative assessment practices in the classroom

can be the basis for deciding what needs to be done to help improve those practices (Box,

Skoog, & Dabbs, 2015). Therefore, school leaders must have information about what is

happening in their schools regarding formative assessment use to determine what areas of

support should be targeted; without this information, their “efforts may lack focus and

direction” (Sanzo et al., 2015, p. 49).

Local school leaders’ concerns about lack of consistent formative assessment use

at Hammond were heightened after an external review was conducted at the end of the

school year. The survey revealed that “teaching and assessing for learning,” which

included questions relating to formative assessment, was an area that showed one of the

lowest ratings from parents (2015 survey, available as internal document). Because of
the low rating, school leaders added to the local school improvement plan a need for

helping all external stakeholders understand practices regarding teaching and assessing

learning at Hammond, such as how teachers check for student understanding so that they

can address misunderstandings. Having clear information about formative assessment

practices in the classrooms may help school leaders address any stakeholder concerns

about teachers’ use of assessments to help students reach learning goals (Moss,

Brookhart, & Long, 2013).

To gain more information about teacher instructional practices, leaders at

Hammond examined local school data from a student survey conducted by TRIPOD, a

school improvement company that collects and reports on student perspectives about

teaching and learning. From a sample of 428 Hammond students, TRIPOD found that

52% of the students taking the online survey marked true for the following statement,

"My teacher often thinks I understand when I really don't" (TRIPOD, 2016). One

interpretation of the TRIPOD respondent data could be that the information demonstrates

a problem at Hammond regarding consistency in the use of formative assessment in the

classroom to check for student understanding. Another interpretation could simply be

that students did not understand some of the instructional methods their teachers used to

assess their understanding, or they did not realize when teachers were implementing these

methods. The former explanation reflects a national problem where researchers have

found that teachers either do not check for understanding or that they do so ineffectively

or inconsistently (Fisher & Frey, 2014a; Havnes, Smith, Dysthe, & Ludvigsen, 2012),

and teachers often do not know how to adapt instruction based on the results of checking
for student understanding (Miranda & Hermann, 2015; Wood, Turner, Civil, & Eli,

2016).

The local school data showing poor academic performance, classroom

observations by administration, and the results of local survey data from students and

parents indicated an inconsistent use of formative assessment at Hammond, which

warranted investigation. With a deeper understanding of teachers’ formative assessment

practices, school leaders may provide necessary instructional supports to ensure regular

use of formative assessment. Over time, with proper supports in place, teachers’

consistent use of formative assessment at Hammond may help increase student learning

in the classroom. With a deeper understanding of curricular concepts, students can meet

state and district learning goals which may help improve overall student achievement

levels. Therefore, the purpose of this qualitative case study was to examine how teachers

used formative assessment to check for student understanding and to adjust instruction so

that leaders could make informed decisions to support the consistent use of this research-

based practice at Hammond.

Evidence of the Problem from Literature

Formative assessment is not a trend that simply concerns Hammond. Rather, it

has concerned educators from its formal introduction into the profession by Black and

Wiliam (1998b). Formative assessment is a process in which classroom tasks, planned or

unplanned, are used regularly during the learning process to provide feedback about

students’ current levels of understanding so that teaching and learning can be modified to

address any gaps in learning and to improve student achievement (Black & Wiliam,
1998b; CCSSO, 2008; Chappuis, 2015; Clark, 2012b; Stiggins & Dufour, 2009). The

formative assessment process is often misunderstood and inconsistently defined among

educators; this has contributed to significant confusion as to what exactly formative

assessment looks like in practice (Havnes et al., 2012).

Formative assessment is a noteworthy, research-based practice that can influence

both teaching and learning. Since its introduction, studies have shown that student

achievement can be linked directly to teacher use of formative assessment to check for

and to address student understanding (Andersson & Palm, 2017; Baird et al., 2014;

Conderman & Hedin, 2012; Cornelius, 2014; Filsecker & Kerres, 2012; Hattie, 2012;

Hudesman et al., 2013; Madison-Harris & Muoneke, 2012; Yin, Tomita, & Shavelson,

2013). However, despite the body of research regarding the benefits of formative

assessment on student achievement, concerns about the manner and efficacy of teacher

use of this research-based strategy remain. Since Black and Wiliam’s (1998b) extensive

review of formative assessment practices, teachers have been encouraged to use

formative assessment to improve student learning in their classrooms (Popham, 2013).

Despite the popularity of formative assessment as a sound instructional practice, Herman

(2013) found that the research-based instructional practice “remains an elusive concept”

(p. 2). Several studies across the nation have shown that even when teachers use

formative assessments, they are often not implementing them as fully as possible (Wylie

& Lyon, 2015) to ensure students understand the concepts delivered in the classroom

(Earl, 2013). Likewise, studies have repeatedly shown that formative assessment is not

used, or is only superficially used, in most classrooms (Popham, 2014). Box et al. (2015)
declared that despite efforts of even large-scale institutions such as the Educational

Testing Service (ETS), the National Science Teachers Association (NSTA), the National

Academies, and the National Research Council to promote use of formative assessment

in education, “Formative assessment practices have not been heartily embraced by the

nation’s teachers” (p. 2). Furthermore, factors that may impede teachers’ use of

formative assessment are not clear (Heitink, Van der Kleij, Veldkamp, Schildkamp, &

Kippers, 2016).

The circumstances and strategies used by teachers to implement formative

assessment are also not well known (Sach, 2015). Some studies have found that many

teachers are not only using formative assessment inconsistently, but that they also are not

using it accurately (Earl, 2013). Several researchers have found that there seems to be a

lack of understanding about what is meant by formative assessment. Some practices that

teachers may believe are formative assessment, such as quizzes and unit tests that are

graded, may not follow the processes prescribed by and defined in the research (Clark,

2012a; OECD, 2013; Sztajn, Confrey, Wilson, & Edington, 2012). In addition, many

teachers have not received instruction on how to use formative assessment in the

classroom (Curry, Mwavita, Holter, & Harris, 2016; DeLuca & Bellara, 2013; Dunn,

Airola, Lo, & Garrison, 2012; Mandinach & Gummer, 2013). Factors such as the

misunderstanding of what is meant by formative assessment and the lack of training for

teachers contribute to the widespread problem about the consistent implementation of

formative assessment in schools. To address the problem, it is important to find out

exactly how teachers implement formative assessment; only then can consistency of use
be developed (Duckor, 2014). By having more information about how teachers use

formative assessment, school leaders can gain insight into how they can support teachers’

use of this research-based instructional practice.

As I noted in the introduction, formative assessment is used consistently and

accurately by effective teachers and in high-achieving urban schools (Johnson et al.,

2013). Successful teachers’ practices include daily monitoring of student understanding

in the classroom to recognize where students are in their learning, and adjusting

instruction accordingly. A focus on increasing student understanding through the

implementation of formative assessment strategies in the classroom may help improve

student achievement because students may understand the concepts more fully. School

leaders cannot afford to be uninformed about their teachers’ formative assessment use in

light of the large body of research showing the importance of its use in the classroom to

influence student achievement (Andersson & Palm, 2017; Baird et al., 2014; Cornelius,

2014; Hattie, 2012; Hudesman et al., 2013; Madison-Harris & Muoneke, 2012; Yin et al.,

2013). Therefore, if leaders throughout the field of education want to address issues in

their schools connected to student achievement, then they should have clear information

about how teachers use formative assessment to support consistent and accurate use of

this practice.
Definition of Terms

I have provided the following key terms and their corresponding definitions to

clarify their use within this study:

Assessment: A tool, task, or method that is used to inform educators about student

learning. Assessment can take the form of teacher questioning, teacher-developed tasks

or tests, high-stakes tests, student portfolios, projects, or performance tasks (Supovitz,

2012).

Assessment for learning: Another term often used for formative assessment (Van

der Kleij, Vermeulen, Schildkamp, & Eggen, 2015; Wiliam, 2013).

Convergent questioning: Asking questions that are primarily used for factual

recall (Jiang, 2014); also known as eliciting low-level thinking or close-ended responses.

Divergent questioning: Asking questions that encourage diverse responses (Jiang,

2014); also known as eliciting high-level thinking or open-ended responses.

Exit slip: A formative assessment in which students write their answer to a

question at the end of the lesson and submit it to the teacher when leaving the classroom;

teachers adjust instruction for the next lesson based on student responses (Andersson &

Palm, 2017). Exit slips are also known as exit tickets or exit passes. (An illustrative sketch of tallying such responses follows this list of terms.)

Formative feedback: Information a teacher receives about student understanding

as a result of student responses to a formative assessment (Popham, 2013).

Formal formative assessment: Formative assessment that is planned in advance of

a lesson to gather information about student understanding during instruction (Chappuis,

2015).
Formative assessment: A process in which classroom tasks, planned or

unplanned, are used regularly during the learning process to provide feedback about

students’ current levels of understanding so that teaching and learning can be modified to

address any gaps in learning and to improve student achievement (Black & Wiliam,

1998b; CCSSO, 2008; Chappuis, 2015; Clark, 2012b; Stiggins & Dufour, 2009).

Formative assessment strategy: An activity or instructional tool that is used by

teachers to give students an opportunity to demonstrate their thinking and to collect

information about student understanding (Kang, Thompson, & Windschitl, 2014).

Formative assessment task: Any activity students participate in to demonstrate

their understanding of curricular learning goals (Kang et al., 2014).

Formative questioning: Asking questions to check for student understanding;

teachers evaluate student responses to formative questions to help make instructional

decisions to improve learning (Jiang, 2014).

Guided instruction: A teacher’s “strategic use of questions, prompts, or cues

designed to facilitate student thinking” (Fisher & Frey, 2014a, p. 13). The process should

involve feedback from formative assessment tasks that check for student understanding.

Informal formative assessment: Formative assessment that is not planned but is

created on the fly, on the spur of the moment, when teachers want to gather information

about student understanding during instruction (Chappuis, 2015).

Initiate-response-evaluate (IRE): A model of questioning where the teacher asks a

formative question, a student or several students answer, and the teacher gives feedback
on whether the answer was correct or incorrect (Duckor, 2014; Pearsall, 2018; Wiliam,

2014).

Opportunity to respond (OTR): Instructional strategies that encourage

participation from all students to help teachers quickly reveal what students understand

during formative assessment and if any immediate instructional adjustments should be

made to facilitate learning (Menzies, Lane, & Oakes, 2017).

Professional learning communities (PLCs): “Professional learning that increases

educator effectiveness and results for all students occurs within learning communities

committed to continuous improvement, collective responsibility, and goal alignment”

(Learning Forward, 2017, para. 1).

Scaffolding: Instructional “support provided during the teaching and learning

process, tailored to the individual’s needs (and ZPD) and may take the form of such

things as modeling, coaching, prompting, key questions, and other forms of feedback”

(Herman, 2013, p. 13).

Student feedback: Information about students’ current levels of understanding that

a teacher can use to make instructional decisions (Popham, 2013).

Summative assessment: Assessment used for the purpose of measuring student

achievement after a period of learning. This type of assessment is often used for

accountability purposes (Linquanti, 2014).

Warm-up: A formative assessment in which students write their answers to

questions at the beginning of class; teachers use the feedback to determine the current

level of student understanding and to adjust instruction based on student responses


(Conderman & Hedin, 2012). Warm-ups are also known as do-nows, starters, bell-

ringers, kick-offs, admit slips, and entrance slips.

Zone of proximal development (ZPD): “The developmental space between the

level at which a student can handle a problem or complete a task independently and the

level at which the student can handle or complete the same task with assistance from a

more competent other, such as a teacher” (Trumbull & Lash, 2013, p. 5; Vygotsky,

1978).
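
As noted in the exit slip entry above, the following is a minimal, hypothetical sketch in Python of how a teacher might tally exit slip or warm-up responses to decide which questions to readdress in the next lesson. The 80% cutoff, the answer key, and the sample responses are illustrative assumptions, not data or procedures from this study.

# Hypothetical tally of exit slip responses; cutoff and sample data are assumed.
from collections import Counter

RETEACH_CUTOFF = 0.80  # assumed share of correct responses needed to move on

def summarize_exit_slips(responses, answer_key):
    """Return the share of correct responses for each exit slip question."""
    correct, total = Counter(), Counter()
    for student in responses:
        for question, answer in student.items():
            total[question] += 1
            if answer == answer_key[question]:
                correct[question] += 1
    return {question: correct[question] / total[question] for question in total}

# Example: three students answer two questions at the end of a lesson.
answer_key = {"q1": "mitosis", "q2": "meiosis"}
responses = [
    {"q1": "mitosis", "q2": "meiosis"},
    {"q1": "mitosis", "q2": "mitosis"},
    {"q1": "meiosis", "q2": "meiosis"},
]
for question, share in summarize_exit_slips(responses, answer_key).items():
    plan = "move on" if share >= RETEACH_CUTOFF else "readdress next lesson"
    print(f"{question}: {share:.0%} correct -> {plan}")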

Significance of the Study

Decades of research have shown that using formative assessment can positively

influence student achievement (Baird et al., 2014; Black & Wiliam, 1998b; Cornelius,

2014; Filsecker & Kerres, 2012; Hudesman et al., 2013; Madison-Harris & Muoneke,

2012). Studies have also shown that student achievement may be improved in schools

where teachers use this research-based strategy appropriately (Ali & Iqbal, 2013;

Andersson & Palm, 2017; Hattie, 2012; Mehmood, Hussain, Khalid, & Azam, 2012; Yin

et al., 2013). In an era of increased accountability, educators must properly implement

highly effective practices such as formative assessment (Chan, Konrad, Gonzalez, Peters,

& Ressa, 2014). School leaders play an important role in teachers’ appropriate use of

formative assessment practices (Stanley & Alig, 2014). With better understanding of

how teachers use formative assessment to check for understanding and to adjust

instruction, school leaders can make better decisions as to what instructional and

administrative supports are needed to ensure its consistent implementation. Because of

the vast number of studies showing a connection between teacher formative assessment
use and student achievement (Andersson & Palm, 2017; Baird et al., 2014; Cornelius,

2014; Hattie, 2012; Hudesman et al., 2013; Madison-Harris & Muoneke, 2012; Yin et al.,

2013), understanding and supporting formative assessment use in schools is essential.

Therefore, a study designed to understand teachers’ formative assessment use can be

beneficial to both a local school setting and the educational profession.

School leaders may use formative assessment information from this study to

develop appropriate professional development and support systems for teachers to

encourage consistency and fidelity in the use of this instructional strategy. The

information resulting from this study may also show a need for continued monitoring of

formative assessment practices. As Fisher and Frey (2014a) stated, having accurate

information about formative assessment use in a school is essential for helping leaders

create an appropriate climate for promoting and sustaining this practice. Being informed

about teacher formative assessment use will also allow Hammond’s leaders to present

greater transparency when addressing community stakeholder concerns about teaching

and assessing practices within the school and to show how formative assessment is being

used to help students meet learning goals.

Teachers at Hammond may also benefit from the information about formative

assessment use that this study offers. With support from school leaders, teachers may

check for student understanding with greater fidelity and adjust their instruction to address

any misunderstandings they uncover. These instructional practices are

important because studies have shown that when teachers are appropriately using

formative assessment, student achievement increases (Ali & Iqbal, 2013; Andersson &
Palm, 2017; Hattie, 2012; Madison-Harris & Muoneke, 2012). Student achievement is

not only significant to school leaders, but with increasingly rigorous teacher evaluation

systems across the nation that include student achievement, it is also of growing

importance to teachers. Popham (2013) even advised, “The higher the stakes associated

with a given teacher evaluation system, the greater should be a teacher’s interest in

becoming a skilled user of formative assessment” (p. 13). Research has clearly shown

that teachers who use formative assessment are more likely to be evaluated as instructionally

effective (Popham, 2013; Stiggins, 2014). Because many teacher evaluation tools, such

as the Danielson Framework (2007) used at Hammond, contain rubrics about the extent to which

formative assessment practices are used to uncover and address student understanding,

having support for learning how to effectively implement formative assessment is

essential (Wylie & Lyon, 2015).

Consistent implementation of formative assessment may also help improve a

student’s ability to meet learning goals. Yin et al. (2013) declared that formative

assessment use in the classroom could result in an increased student understanding of

curricular concepts taught in class. An increased understanding of the concepts may help

students pass more classes at Hammond and therefore earn the necessary credits to move

to the next grade level and to graduate on time. Research has also shown that

achievement on high-stakes assessments can be directly linked to teachers’ use of

formative assessment in the classroom (Conderman & Hedin, 2012; Curry et al., 2016).

Therefore, if formative assessment leads to better understanding of the district curricular

learning goals, then students may also improve on state assessments. Taken together,
these potential results for school leaders, teachers, and students may lead to positive

social change by helping all stakeholders gain the necessary information about teacher

formative assessment use to increase student understanding and potentially raise student

achievement levels at Hammond so that students can be better prepared for the future.

Research Questions

School leaders at Hammond were concerned with the lack of consistent use of

formative assessment and needed to better understand how teachers used this

instructional practice so they could address this issue. Teacher data and insights needed

to be gathered to help leaders at Hammond gain an understanding of formative

assessment use in their school to support consistent implementation of this research-

based practice. I conducted a qualitative case study that concentrated on the manner and

degree to which teachers used formative assessment in classrooms to check for student

understanding and to adjust instruction. Formative assessment, which aims to help students

meet learning goals and thereby improve student achievement, is grounded in

formative assessment theory (Black & Wiliam, 1998b), which I discuss in the next

section.

I developed the following questions as the basis for this study to gather

information about teacher formative assessment use in the classroom:

RQ1: How do teachers use formative assessment to check for student

understanding of state and district learning goals?

RQ2: How do teachers use student feedback collected during formative

assessment to adjust their instruction?


RQ3: What are teachers’ perceptions of formative assessment to check for

understanding and to adjust instruction?

Review of the Literature

This literature review consists of research about formative assessment from

professional journal articles, conference papers, government publications, books, seminal

works, and collegial communications. I found research articles and publications by

searching the following databases through university library resources and online

research databases: Academic Search Complete, Education Source, ERIC, ProQuest

Central, SAGE Premier, Science Direct, ResearchGate, Taylor and Francis Online, and

Teacher Reference Center. Most searches were limited to peer-reviewed research

conducted within the past 5 years from 2013-2018. However, I used older literature

(1968-2012) to establish historical perspective on formative assessment work. The

research was analyzed and divided into the following topics: a brief history of formative

assessment, formative assessment defined, formative assessment versus summative

assessment, formative assessment and student achievement, and the two formative

practices that are at the center of this study—checking for understanding and adjusting

instruction. Subtopics within the two formative assessment practices include formatively

assessing all students, appropriate formative questioning, convergent and divergent

questioning, frequency of checking for understanding, formative assessment tasks, using

information from formative assessment, and making instructional decisions. I used the

following search terms in the databases to find research pertinent to the topics of this

study: formative assessment, formative assessment theory, assessment for learning,


checking for understanding, adjusting instruction, formative feedback, instructional

decisions, formative assessment and student achievement, formative assessment

implementation, formative questioning, formative assessment strategies, and summative

assessment. Close to 100 articles met the criteria for inclusion in this literature review.

Framework

This study was informed by the work of Black and Wiliam (1998b), who laid the

foundation for the formative assessment theory, which is the idea that student

understanding and learning can be intentionally enhanced with regular classroom

assessment, feedback, and instructional adjustments. Black and Wiliam (2009) declared

that formative assessment provides information to students and teachers during the

learning process about how well students are progressing toward intended learning goals.

Black and Wiliam (1998b) realized the importance of the connection between

discovering what students know during formative assessment and the need for teachers to

adjust their instruction accordingly. They argued that an assessment becomes formative

only when the information gathered from the assessment is used to modify classroom

instruction to address student learning needs (Black & Wiliam, 1998b). Black and

Wiliam (2009) insisted that teachers must understand formative assessment well to use it

to help identify gaps between students’ current understanding and the desired learning.

Teachers can then make decisions as to what instructional strategies they can use to help

students close such gaps.

Black and Wiliam’s (1998b) theory of formative assessment is based on the social

development theory, which is grounded in constructivism (Clark, 2012b; Shepard, 2008).


This theory states that students actively develop knowledge and understanding over time

in an interactive social learning context guided by a teacher (Vygotsky, 1978). Students

and teachers interact with one another during the formative assessment process. The

teachers monitor learning through dialogue with students, and students learn from each

other, from the teacher’s feedback, and from instructional supports (Torrance, 2012).

Formative assessment, therefore, is “more than a checklist of qualities or a collection of

activities. Rather, it’s made up of a sequence of moves that invite a positive ongoing

relationship between teachers and their students” (Duckor, 2014, p. 28).

Although the student and the teacher both have roles in social learning, in this

study I focused on the role of the teacher. Social development theory (Vygotsky, 1978)

highlights the contributions of teachers who have already developed the needed skills and

knowledge to assist students in their learning (Piaget, 1954; Vygotsky, 1978). While

helping students with knowledge assimilation, teachers must recognize and address the

gaps between current student understanding and the intended learning goals. Formative

assessment theorists have found Vygotsky’s ZPD useful in understanding students’

current levels of understanding and their potential levels (Clark, 2015; Magno & Lizada,

2015; Sach, 2012; Sach, 2015; Trumbull & Lash, 2013). According to Trumbull and

Lash (2013):

The ZPD is the developmental space between the level at which a student can

handle a problem or complete a task independently and the level at which the

student can handle or complete the same task with assistance from a more

competent other, such as a teacher. (p. 5)


The ZPD can be used to show how learning gaps can be addressed by having the teacher

(referred to as a “more knowledgeable other” by Vygotsky) provide scaffolding (learning

supports) for students to reach intended and attainable learning goals (Crossouard &

Pryor, 2012; Heritage & Heritage, 2013; Vygotsky, 1978; Wiliam, 2009). The learning

gap “is eventually closed when the child starts to demonstrate skills and can accomplish

the assessment tasks” on his or her own (Magno & Lizada, 2015, p. 28). Therefore, the

ZPD and the purpose of formative assessment are well aligned, and by checking for

student understanding during the formative assessment process, teachers can determine a

student’s ZPD and what scaffolds are needed (Torrance, 2012; Vygotsky, 1978). After

gathering formative assessment feedback, teachers can decide if they need to modify their

instruction to meet the needs of the students. If formative assessment practices show that

students understand curricular concepts and “the relevant ZPD conceptual structure can

be met” (p. 187), then teaching and learning can move forward (Heritage & Heritage,

2013). If students do not understand, Heritage and Heritage (2013) advised:

A student response that conveys an incomplete or fragmentary grasp of the

relevant ZPD structure must stimulate the teacher to take stock of the situation,

and make choices about the appropriate next step and how it may be implemented

in a cyclical pattern in which moving forward may involve, at least temporarily,

moving backward. (p. 187)

In other words, teachers may decide to reteach or re-explain concepts using scaffolds to

close the learning gap so that students can develop understanding. In short, Clark

(2012b) pointed out that the theory of formative assessment is based on the teacher
appropriately adjusting instruction to meet students at their current level of

understanding. This means formative assessment practices become an integral part of the

teaching and learning process.
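
To make the cycle just described concrete, the following is a minimal, hypothetical sketch in Python of the underlying decision logic: gather evidence of understanding, compare it with the intended learning goal, and either reteach with scaffolds or move forward. The mastery cutoff and the stubbed evidence value are illustrative assumptions, not elements prescribed by Black and Wiliam's theory.

# Hypothetical sketch of the formative assessment decision cycle; the cutoff
# and the stubbed evidence value are assumptions for illustration only.

MASTERY_CUTOFF = 0.8  # assumed share of students who must demonstrate the goal

def check_understanding():
    # Stand-in for a real check (formative questioning, warm-up, exit slip);
    # returns the observed share of students demonstrating the learning goal.
    return 0.6

def adjust_instruction(share_meeting_goal):
    """Decide the next instructional move from formative assessment evidence."""
    if share_meeting_goal < MASTERY_CUTOFF:
        return "reteach with scaffolds to close the learning gap"
    return "move forward to the next learning goal"

print(adjust_instruction(check_understanding()))
# Prints: reteach with scaffolds to close the learning gap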

Formative assessment can be conceptualized into key processes and roles that

allow for useful integration into classroom practice. Wiliam (2018) outlined three

processes to consider with formative assessment: where students are in their current

learning, where they should be in their learning, and what must be done to help

them get there. There are also three roles to consider within these processes: the teacher,

the student, and the peers (Wiliam, 2018). Even though students and peers have an

important part in the formative process, in this study, I focused specifically on the

teacher. The main role of the teacher in formative assessment, according to Heritage and

Heritage (2013), is to “elicit data that can inform the direction of learning during its

ongoing process” (p. 176). Specifically, the teacher gathers information on student

understanding, analyzes and interprets the data, and adjusts his or her instruction

accordingly (Chappuis, 2015). The research questions for this study, therefore, were

teacher-focused and designed to help me understand the teacher's role in determining

where students are at in their learning (checking for understanding) and how to help

students reach intended learning goals (adjusting instruction to address student

misunderstandings). In addition, I elicited teacher perceptions of formative assessment

through interviews to uncover their knowledge and use of this research-based practice.

The data gathered from the research questions will help address the lack of consistent use

of formative assessment at the local high school. The results might help leaders gain a
clearer picture of formative assessment use in their building. School leaders can then

determine what steps, if any, are needed to support consistent formative assessment

implementation in the classroom as a strategy to positively influence student achievement

by helping students reach intended learning goals.

Brief History of Formative Assessment

Assessment has long been a part of the educational landscape to measure the

achievement or abilities of students, but assessment diverged into two categories with

different roles in the late sixties—summative and formative. The terms summative and

formative, first introduced by Scriven in 1967, describe two types of evaluations that can

be used to measure the quality of curricular programs. Scriven (1967) used the two terms

to denote distinctions in the purposes of collecting curricular information, whether the

information is used to determine if the implemented program has met its intended goals

(summative) or if it is used to contribute to improving a program during its planning or

implementation (formative). Two years later, Benjamin Bloom (1969) suggested that

summative and formative evaluations could be connected to teaching and learning and

began to delve into how formative evaluation processes could be used to assess student

learning. Bloom (1969) described his view of formative evaluation as “brief tests used

by teachers and students as aids in the learning process” and argued that “we see much

more effective use of formative evaluation if it is separated from the grading process and

used primarily as an aid to teaching” (p. 48). Hence, the idea of formative assessment as

a diagnostic tool influencing teacher instruction was formed: one that encouraged

teachers to assess learning as it was occurring, not afterward.


Researchers explored the concept of formative assessment in the decades that

followed as educators began to examine its potential role in instruction. Widespread

consideration of formative assessment use in the classroom, however, did not take place

until after the No Child Left Behind Act (NCLB) was enacted in 2001 (Popham, 2013).

NCLB called for increased accountability in schools by requiring educators to administer

standardized tests to students yearly, and to regularly show improvements in test scores.

These summative tests took place after student learning occurred, so they could not help

teachers assess and improve student learning during the year; by the time results arrived, it was virtually too

late. Because of NCLB, educators in the United States were “feverishly searching for

ways to boost student achievement so they could dodge NCLB’s negative sanctions”;

they soon started to “give serious attention” to implementing formative assessment in the

classroom (Popham, 2013, p. 11). Formative assessment is a collection of tasks and

strategies that give teachers a way to regularly gather information on student

understanding during the learning process so they can positively affect student

achievement (Stiggins, 2014; Stiggins & DuFour, 2009).

With the increased interest in classroom formative assessment, attention soon

focused on Black and Wiliam’s influential 1998 publication, “Inside the Black Box.”

After completing a meta-analysis of over 250 research articles on formative assessment,

Black and Wiliam (1998b) found this practice to be a powerful tool that could yield

significant learning gains. However, Black and Wiliam cautioned that significant work

still needed to be done for formative assessment to be effectively implemented in

classrooms. They made the following recommendations: (a) formative assessment work
will require significant changes in pedagogy and classroom practice; (b) assumptions

about what makes for effective learning must be revisited; (c) feedback between the

teacher and learner needs to be enhanced; and (d) for assessment to be formative, results

must be used to adjust teaching and learning. Black and Wiliam (1998a) made these

recommendations because they noted that teachers did not seem to understand or

implement formative assessment appropriately. Despite years of further research, the

concern about teachers’ understanding and implementation of formative

assessment in classroom practice prevails today (Box et al., 2015; Earl, 2013; Popham,

2014; Wylie & Lyon, 2015).

Understanding Formative Assessment

Researchers have proposed many definitions of formative assessment over the

years to determine what makes an assessment formative. In fact, the wide range of

inconsistent definitions may be one of the reasons behind the misunderstanding of

formative assessment and its ineffective use in the classroom (Filsecker & Kerres, 2012;

Havnes et al., 2012). Studies have shown that when teachers do not understand what

components make an assessment formative, they do not successfully implement

formative assessment with their students (Clark, 2012a; OECD, 2013; Sztajn et al.,

2012). Because of the complexity and the confusion surrounding formative assessment, I

examined its multifaceted definitions. Understanding how researchers defined formative

assessment helped clarify the main characteristics that were important to its

implementation and aided in the development of themes during data analysis.


Black and Wiliam (1998a) developed one of the first formal definitions of

formative assessment. Their research described formative assessment as “all those

activities undertaken by teachers, and/or by their students, which provide information to

be used as feedback to modify the teaching and learning activities in which they are

engaged” (p. 7). Black and Wiliam (1998b) later updated this definition by adding that

an assessment becomes formative when the information gathered is used to adjust

instruction to meet student learning needs. Though early definitions such as these are

frequently cited in the literature, the main characteristics of formative assessment have

evolved over the years to highlight and clarify key aspects that researchers deem

important to understanding and effectively implementing formative assessment into

practice (Chan et al., 2014; Chappuis, 2015; Clark, 2012b; Magno & Lizada, 2015).

One characteristic of formative assessment that gained attention was its use to

assess student understanding while learning is taking place. Checking student

understanding during a lesson was a rather new concept a couple of decades ago. In the

past, assessments were best known as a way to determine what a student knew at the end

of a learning cycle to establish their academic standing, often in the form of a letter grade

(Chappuis, 2015; Sadler, 1989). One of formative assessment’s key characteristics,

which set it apart from the well-known summative assessment, is that it includes

monitoring student learning during the instructional process (Chappuis & Stiggins, 2002).

Stiggins and DuFour (2009) expanded on this difference by clarifying the frequency with

which monitoring should take place in the classroom. They stated, “Formative classroom

assessments must provide an answer about where a student is located in his or her
learning, not once a year or every few weeks, but continuously while the learning is

happening” (p. 641). More specifically, Havnes et al. (2012) recommended that teachers

should use formative assessment every day to help students gain a complete

understanding of curricular concepts.

Another feature of formative assessment is that teachers may need to modify their

instruction to move the current level of student understanding to a deeper level of

understanding. Black and Wiliam (1998b) were the first to insist that for assessment to be

formative, the results must be used to adjust teaching. Likewise, Tomlinson (1999)

declared that formative assessment “is today’s means of understanding how to modify

tomorrow’s instruction” (p. 10). Because of the focus on formative assessment’s role in

instruction, Black and Wiliam (2009) revised their previous definition of formative

assessment to include more emphasis on instructional adjustment:

Practice in a classroom is formative to the extent that evidence about student

achievement is elicited, interpreted, and used by teachers, learners, or their peers,

to make decisions about the next steps in instruction that are likely to be better, or

better founded, than the decisions they would have taken in the absence of the

evidence that was elicited. (p. 9)

More recently, Miranda and Hermann (2015) discussed the need for formative

assessment to be used to modify instruction, but they added that the adjustments could be

done “in real-time” and that teachers are better able to adapt their teaching when

formative assessment is “regular and ongoing” (p. 83).


Feedback, another component commonly found in definitions of formative

assessment, is often interpreted and explained in different ways. Ramaprasad (1983)

defined feedback in terms of a performance gap. He stated, “Feedback is information

about the gap between the actual level and the reference level of a system parameter

which is used to alter the gap in some way” (p. 4). This definition, however, does not

explain how information about the gap is used. Sadler (1989) clarified this ambiguity by

explaining that feedback can provide information for both the teacher and the student to

make improvements—the teacher for decision-making and the students for self-

monitoring. The Council of Chief State School Officers (CCSSO, 2008), which worked

with researchers and educational leaders to develop a common definition of formative

assessment, also emphasized feedback. They defined formative assessment as “a process

used by teachers and students during instruction that provides feedback to adjust ongoing

teaching and learning to improve students’ achievement of intended instructional

outcomes” (CCSSO, 2008, p. 3). According to Hattie (2012), the most effective

feedback flows from student to teacher: formative assessment helps teachers determine

what students know, what they understand, what errors they are making, and what

misconceptions they may have. Collecting feedback from students is not enough,

however. Hudson et al.

(2013) and Van der Kleij et al. (2015) clarified that feedback is formative only when

teachers use it to make decisions to adjust their instruction and provide instructional

supports for closing a learning gap.

Feedback should also include information given from teachers to students. In the

formative assessment process, after students are asked to demonstrate their


understanding, the teacher should give corrective feedback with the intention to help

improve student learning (Hudesman et al., 2013). One way this teacher-student

exchange can happen is after students have completed a formative assessment task and

the teacher provides the whole group with the correct answers (Magno & Lizada, 2015).

Another way for the teacher to provide corrective feedback is while students are actively

working on a formative assessment task (Clark, 2012a). During the task, which can be

written or verbal, teachers “specifically point out what needs to be checked again,

improved, revised, changed, or reworked” (Magno & Lizada, 2015, p. 27). Not only will

teacher interactions during or after a formative task provide students with corrective

feedback, but prompt communication will also allow students to understand where they

stand in relation to the learning goals (Clark, 2012b). To address areas where

improvement is needed, feedback to students should be clear and given in a timely

manner to assist them in progressing their learning toward established curriculum goals

(Mandinach, 2012). Similarly, Chan et al. (2014) recommended that feedback be

immediate, direct, and delivered to students in a variety of ways. Immediate feedback

has been found to be especially important for struggling learners as it focuses their

learning (Chan et al., 2014).

A final, but equally important, characteristic of formative assessment is that its

use is viewed as a process. Black and Wiliam (1998b) first described formative

assessment as activities, and Chappuis and Stiggins (2002) referred to formative

assessment as instruments. The CCSSO (2008), however, defined formative assessment

as a process rather than a specific instructional task, tool, or test used to gather
information in the classroom. They also acknowledged that many different types of

formative assessment strategies can be used during the process to inform instructional

decisions. Popham (2014) explained that formative assessment is thought of as a process

that begins with checking for student understanding. The teachers must then continue to

the next step by deciding, based on formative feedback from students, whether or not to

make adjustments to their instruction to help learning progress, and if so, what

adjustments should be made. Heritage (2010) cautioned that if formative assessment is

only thought of as a test or instrument and not a process, the benefits of the instructional

practice for teaching and learning might be lost. She warned, “This distinction is critical,

not only for understanding how formative assessment functions, but also for realizing its

promise for our students and our society” (Heritage, 2010, p. 1).

Popham (2014) defined formative assessment as a planned process; however,

other researchers contend that it can be either planned or unplanned (Antoniou & James,

2014; Havnes et al., 2012). Chappuis (2015) stated that formative assessment could be

thought of in two ways: (a) formal formative assessment, which is planned in advance of

a lesson to gather information about student understanding during instruction; and (b)

informal formative assessment, which is not planned but instead is done on the fly, in

the spur of the moment. Thinking of formative assessment as planned or unplanned

can allow teachers the freedom to use formative tasks whenever they see a need to check

for student understanding.

I considered the many definitions and characteristics of formative assessment

found in the literature when developing the formative assessment definition for this
study. I began with Black and Wiliam’s (1998b) definition, but gave greater clarification

by adding the following components: (a) the purpose of formative assessment is to

regularly gather information on student understanding during the learning process

(Stiggins & DuFour, 2009), (b) formative assessment helps to close a learning gap

between what students currently understand and the established learning goals (Clark,

2012b), (c) formative assessment is used to improve student achievement (CCSSO,

2008), and (d) formative assessment is a process that can be planned or unplanned

(Chappuis, 2015). Therefore, in this study formative assessment was defined as a process

in which a classroom task, planned or unplanned, is used regularly during the learning

process to provide feedback about students’ current levels of understanding so that

teaching and learning can be modified to address any gaps in understanding and improve

student achievement (Black & Wiliam, 1998b; CCSSO, 2008; Chappuis, 2015; Clark,

2012b; Stiggins & DuFour, 2009).

Formative Assessment Versus Summative Assessment

To truly understand what is meant by formative assessment, it is important to

understand summative assessment. The two main forms of assessment, formative and

summative, have contrasting but complementary roles in education. Unlike formative

assessment, which is used to determine student understanding during learning, summative

assessment is used to measure student understanding after learning has taken place

(Filsecker & Kerres, 2012; Roskos & Neuman, 2012). The main purpose of summative

assessment is to “judge student competency after an instructional phase is complete”

(Fisher & Frey, 2014a, p. 7). Summative assessment can take the form of unit tests,
standardized tests, district exams, grade-level tests, and final exams. Educators give

summative assessments less frequently than formative assessments, and they are usually

graded (Dixson & Worrell, 2016). Therefore, summative assessment is not beneficial for

determining gaps in student understanding or addressing misunderstandings during the

learning process. Summative assessments administered at the end of a learning cycle do

not provide teachers the timely feedback needed to adjust their instruction or to give

students the information they need to improve while they are learning (Conderman &

Hedin, 2012). In other words, summative assessment has “the disadvantage of

identifying problems when it is too late to resolve them” (Akpan, Notar, & Padgett, 2012,

p. 84).

The long-established testing culture and use of summative assessment in

education have contributed to problems with formative assessment implementation

(Antoniou & James, 2014; Birenbaum et al., 2015; Sach, 2015). Antoniou and James

(2014) stated that “although educational policy usually acknowledged the value and

significance of formative assessment, student assessment prioritises [sic] summative

assessment which is politically more powerful and influential” (p. 154). Therefore, even

if teachers understand the benefits of formative assessment, the focus on summative

assessment in schools could pressure them to direct their attention there instead.

Teachers in a study by Sach (2015) stated that they felt

“considerable pressure to meet government targets for attainment” and this pressure had

“the potential to inhibit the use of more formative assessment methods” (p. 329).

Likewise, Yan and Cheng (2015) discussed how the focus on summative assessment
could affect teacher implementation of formative assessment. They warned that teachers

might not use formative assessment in their teaching, even when they understand the

advantages of the practice, because they feel the pressure to meet the instructional

demands of high-stakes testing. The preceding statement may be one explanation as to

why only a small number of teachers are found to frequently use formative assessment

(Clark, 2012a; OECD, 2013).

Teachers may also be confused about the difference between formative and

summative assessment. The OECD (2013) discussed their findings of an international

study on formative assessment use in classrooms. They found that educators thought

formative assessment was “summative assessment done more often” or as a “practice for

final summative assessment” instead of a process used to assess student understanding

regularly and to inform teaching (p. 151). Studies such as this demonstrate how

educators often do not understand the true purpose of formative assessment as a

diagnostic tool to aid the teaching and learning process (OECD, 2013). Clark (2012a), in

his investigation about formative assessment use in the classroom, also discovered

confusion about the two types of assessment. He found that many teachers believe they

are using formative assessment when they are using summative assessment. As a result,

teachers often use formative assessment to give grades instead of using them to help

advance teaching and learning. Such incorrect use of formative assessment is concerning

considering its well-documented link to student achievement (Andersson & Palm, 2017;

Cornelius, 2014; Filsecker & Kerres, 2012; Hattie, 2012; Hudesman et al., 2013;

Madison-Harris & Muoneke, 2012).


Even though formative assessment has the potential to impact day-to-day teaching

and learning, summative assessment also has its role in education. Summative

assessment can be used to evaluate the effectiveness of instruction, school improvement

goals, programs, or curriculum alignment (Conderman & Hedin, 2012). For students and

parents, summative assessment may help provide the information needed to make

decisions about which schools to attend, which support programs to join, or which

courses are needed to meet educational goals (Tridane, Belaaouad, Benmokhtar, Gourja, &

Radid, 2015). Summative assessment, especially under NCLB, also pressures educators

to find ways to address achievement gaps and increase student achievement (Birenbaum

et al., 2015). Therefore, summative and formative assessment may work together to

influence student learning (Clark, 2015). As Clark (2015) suggested, what is needed is

“the integration of summative and formative assessment activities into a functional

system so that they work in concert to support and evaluate learning” (p. 93). Bennett

(2014) explained that schools need formative information for making important

instructional decisions regarding student learning in the classroom and summative

information to evaluate students academically and socially. Not all researchers agree that

there should be an equal balance. Spector et al. (2016) recommended more attention

should be given to formative assessment as opposed to summative because the former is

associated with improved learning. When teachers are encouraged to use formative

assessment, increased student achievement on summative assessment can follow (Yan &

Cheng, 2015).
Formative Assessment and Student Achievement

Black and Wiliam (1998b) conducted a comprehensive study to determine whether

formative assessment use in the classroom led to higher student achievement. After a

meta-analysis of over 250 publications on formative assessment, they found the effect

sizes for student achievement were between 0.4 and 0.7. They concluded that student

academic achievement gains, as a result of formative assessment use, were “amongst the

largest ever reported for educational interventions” (p. 61). Studies included participants

from several countries in age groups from 5-year-olds to university undergraduates.

Students in the experimental groups, where teachers used formative assessment, had

“significantly higher scores in reading, mathematics, and science than the control group”

(Black & Wiliam, 1998a, p. 12). Furthermore, Black and Wiliam (1998b) found that

classroom formative assessment practices particularly helped young students from

disadvantaged backgrounds. Another finding showed that, when compared to all

students, frequent formative assessment use was especially beneficial for low-achieving

students.
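
To clarify the metric behind these figures: effect sizes of this kind are typically standardized mean differences, expressing how far the average student in an experimental group scored above the average student in a control group, in pooled standard deviation units. One common formulation, offered here for illustration because the individual studies Black and Wiliam (1998b) reviewed may have computed effect sizes in somewhat different ways, is Cohen’s d:

$$d = \frac{\bar{X}_{\text{experimental}} - \bar{X}_{\text{control}}}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}$$

Against Cohen’s conventional benchmarks of 0.2 (small), 0.5 (medium), and 0.8 (large), the reported range of 0.4 to 0.7 therefore represents moderate to large learning gains.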

Despite Black and Wiliam’s (1998b) widely publicized meta-analysis on

formative assessment and student achievement, their findings on formative assessment’s

effectiveness were questioned. A few researchers argued there were inconsistencies in

Black and Wiliam’s (1998b) work. Two studies, conducted by Dunn and Mulvenon

(2009) and Kingston and Nash (2012), cited flawed research designs such as different

interpretations and implementations of formative assessment, small sample sizes of some

studies, and extraneous variables. As a result, these researchers determined that the

evidence for formative assessment’s influence on student achievement was insufficient. After

conducting a critical analysis of the studies Black and Wiliam (1998b) used for their

research, as well as other published materials on formative assessment in the decade that

followed, Dunn and Mulvenon (2009) concluded that research does support a connection

between formative assessment and student achievement. Even though they cited some

problems with methodologies and suggested more research was needed, they

acknowledged formative assessment as “an excellent means of improving student

performance, in particular the achievement of lower performing students” (p. 9).

Another group of researchers, Kingston and Nash (2012), conducted a meta-

analysis of the efficacy of formative assessment in grades K-12 and came to similar

conclusions as Dunn and Mulvenon (2009). After applying their inclusion criteria to

over 300 studies, of which only 13 proved acceptable, Kingston

and Nash (2012) determined the weighted mean effect size of formative assessment on

student achievement was 0.28. Even though this result was substantially lower than

Black and Wiliam’s (1998b) effect sizes, they recognized that formative assessment has

“great practical significance in today’s accountability climate” (Kingston & Nash, 2012,

p. 34). Even though both Dunn and Mulvenon (2009) and Kingston and Nash (2012)

concluded that the degree of influence formative assessment had on student achievement

was debatable, they did acknowledge that formative assessment had positive

influences on student achievement. Many researchers over the past decades have come to

similar conclusions about the influence of formative assessment on learning (Andersson


& Palm, 2017; Baird et al., 2014; Cornelius, 2014; Filsecker & Kerres, 2012; Hudesman

et al., 2013; Madison-Harris & Muoneke, 2012; Yin et al., 2013).
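
The weighted mean effect size Kingston and Nash (2012) reported combines the effect sizes of the 13 included studies so that larger, more precise studies count more heavily. A standard meta-analytic formulation, shown here for illustration because Kingston and Nash’s exact weighting procedure may differ, uses inverse-variance weights:

$$\bar{d} = \frac{\sum_{i=1}^{k} w_i d_i}{\sum_{i=1}^{k} w_i}, \qquad w_i = \frac{1}{\operatorname{Var}(d_i)}$$

where $d_i$ is the effect size estimated by study $i$ and $k$ is the number of studies (here, $k = 13$).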

Some researchers conducted studies to determine which instructional practices

yielded the highest effect size on student achievement. Hattie (2012) synthesized over

800 meta-analyses, drawing on more than 50,000 research studies related to student

achievement, to establish which instructional strategies produced the highest influence

on learning. He found that two practices that are part of the formative assessment process

had among the highest effect sizes of the strategies studied. Teacher questioning (a way to check for understanding)

had an effect size of 0.46 and student-to-teacher feedback (data teachers collected about

student understanding to inform their teaching) had an effect size of 0.73 (Hattie, 2012).

Furthermore, Hattie (2012) discussed how immediate feedback to teachers and students

during formative assessment could yield substantial results. In fact, he stated that when

feedback is regularly a part of the formative assessment process, “there can be a 70 to 80

percent increase in the speed of student learning, even when this learning is measured by

standardized tests” (Hattie, 2012, p. 128). The significance of this finding makes a

compelling argument for using formative assessment to help support overall student

achievement in schools, especially ones struggling with low standardized assessment

scores.

Other researchers have conducted studies on the effect of formative assessment on

student achievement in a particular content area. For example, Mehmood et al. (2012)

conducted an experimental study on secondary school English students using a

pretest/posttest model. Statistical analysis of the pretest in the control and experimental
groups showed no significant difference. The experimental group, which was taught and

assessed by a teacher who used formative assessment practices, had a mean score of

26.86 on the posttest, as compared to the control group, which exhibited a mean

score of 14.83, a difference of 12.03. Mehmood et al. (2012) concluded that formative

assessment played a significant role in student achievement for the group in this study. In

a similar study, Ali and Iqbal (2013) investigated how classroom formative assessment

use affected student achievement in science. Students in the experimental group were

taught six chapters in science by a teacher who used formative assessment regularly

throughout the lessons. The control group was taught with no formative assessment

practices, and they only took a summative test at the end of the chapters. The results

showed that the science students who were taught using formative assessment had higher

achievement levels than the control group.

Li (2016) demonstrated a similar effect of formative assessment on student

achievement, this time on a standardized reading test. Li studied the relationship

between formative assessment and student reading achievement on the 2009 PISA test,

an international standardized assessment. Over 5,000 15-year-old students from 165

schools in the U.S. participated. Li (2016) analyzed data from student questionnaire

items about the frequency of their teachers’ formative assessment practices, teacher-

student relationships, attitudes toward reading, and student scores on the reading portion

of the PISA. The results showed that “formative assessment is significantly related to

reading achievement both directly and indirectly” and that “[the relationship between]

formative assessment and reading achievement is significantly stronger for Black students than for White students” (Li,
2016, pp. 19-20). These findings support using formative assessment not only to

improve reading scores but also to help close the ethnic achievement gap in reading.

With a substantial focus on accountability in education, there is a need for

educators to address student achievement and to implement instructional practices that

can help narrow achievement gaps and raise overall scores. Research about formative

assessment use, linked to both increased classroom learning and standardized test results,

demonstrates how implementing formative assessment can be beneficial for schools with

achievement problems. Conderman and Hedin (2012) found that student achievement on

high-stakes tests “is directly related to high-quality classroom instruction, which requires

teachers to gather continuous formative student assessment data and adjust instruction

accordingly” (p. 168). Curry et al. (2016) conducted a study in a district that supported

teachers collecting formative assessment data as a strategy to increase student

achievement on standardized assessments. Results showed a moderate increase in

student reading scores on the state assessments. Likewise, Hattie (2012) found that

student achievement on standardized tests improved with increased teacher formative

assessment use. Research demonstrates a need for school leaders to be informed about

teachers’ formative assessment practices to improve student achievement.

Checking for Understanding

One important component of the formative assessment process is the need for

teachers to check for student understanding. Student understanding must be monitored to

determine if students are learning the information taught to them. In other words, if

teachers do not know the current level of student understanding, then it is difficult to
address any problems that might be affecting student learning. When checking for

student understanding, the teacher uses formative tasks “to determine what the students

know and do not know, what they can do and cannot do, and their misconceptions, and

their confusion” (Magno & Lizada, 2015, p. 24). Teachers should ask themselves,

“Where are students relative to my immediate learning goals? Who is and who is not

understanding the lesson? What stands in their way of accomplishing the goals? Have

students progressed as I expected? Has their thinking advanced as I had planned? If not,

what misconceptions or learning obstacles do they evidence?” (Herman, 2013, p. 4).

Checking for understanding is important to the formative assessment process because it

allows teachers to give students feedback on their learning and to plan instruction based

on students' errors and misconceptions (Fisher & Frey, 2014a).

Formative assessment strategies to check for student understanding. A

formative assessment strategy is an instrument or activity that “provides information of

sufficient detail to pinpoint specific problems, such as misunderstandings, so that

teachers can make good decisions about what actions to take, and with whom”

(Chappuis, 2015, p. 6). Formative assessment strategies are used to collect information

about student understanding by giving students an opportunity to demonstrate their

thinking (Kang et al., 2014). There are many formative assessment strategies that

teachers can use with their students to check for student understanding. In fact, Trumbull

and Lash (2013) posited that any instructional activity can be used for a formative

purpose if the activity reveals information about student understanding and can be used to

help progress learning.


According to Conderman and Hedin (2012) and Magno and Lizada (2015),

formative assessment strategies can be conducted before, during, or after instruction.

Before instruction, teachers may want to determine students’ current understanding of

upcoming curricular concepts, also called assessing prior knowledge (Clark, 2012b).

Conderman and Hedin (2012) suggested several strategies that teachers can use to

determine what students already know about a topic: class discussions, pretests, warm-

ups, admit slips, anticipation guides, or the first two columns of a KWL chart (K = what I

know and W = what I want to learn). Keeley (2013) also suggested using probes to

uncover student thoughts, especially incorrect ones, about a concept before instruction.

Teachers can use information gathered from any of these strategies, as well as many

others, to make instructional decisions about the amount of time and support to spend on

upcoming learning goals. The information elicited from formative assessment strategies

given before instruction can also help teachers learn about any prior student

misconceptions (Chappuis, 2015; Hattie, 2012; Herman, 2013). Recognizing student

misconceptions can give teachers opportunities to pre-plan questions and check for

student understanding during crucial learning points in the lesson (Chappuis, 2015).

Formative assessment strategies can also be used during instruction to determine

if students currently understand the learning goals and to make immediate instructional

decisions based on the responses (Conderman & Hedin, 2012; Magno & Lizada, 2015).

Teachers can stop at different points in the lesson to check for student understanding,

which allows the teacher to closely monitor progress (Fisher & Frey, 2014a). Teachers

can give students formative assessment tasks during instruction such as writing answers
on dry-erase boards, holding up response cards, responding in unison, writing minute

papers, hand signaling, participating in discussions, using think-pair-share, and engaging

with personal response systems (Akpan et al., 2012; Conderman & Hedin, 2012; Helf,

2015; Nagro, Hooks, Fraser, & Cornelius, 2016; Stefl-Mabry, 2018). The teacher’s goal

should be to use formative assessment strategies to assess all students so they can

accurately determine the current level of understanding of the class and make informed

instructional decisions (Fisher & Frey, 2014a; Wiliam, 2013). Based on student

feedback, the teacher can then decide to continue with the lesson or to stop and reteach

information using a different approach or instructional strategy (Bellert, 2015).

Teachers can also implement formative assessment strategies after instruction.

During this time, teachers can determine whether or not students have met the learning

goals (Wood et al., 2016). Post-instruction formative assessment strategies can include

exit slips, the last column of the KWL chart (L = what I learned), 3-2-1 summaries,

multiple-choice questions, one sentence summaries, concept maps, and self-assessments

(Conderman & Hedin, 2012; Sass-Henke, 2013; Wiliam, 2014). Gathering information

about student learning at the end of a lesson allows teachers to adjust future instruction to

address student errors, misunderstandings, flaws in reasoning, or misconceptions they

find in closing activities (Chappuis, 2015). In a follow-up lesson, for example, teachers

could re-explain concepts, reteach a lesson using a different instructional strategy, allow

for more practice to reinforce learning, or use the results of formative assessment to place

students into groups for differentiated learning (Helf, 2015; Mehmood et al., 2012).
The formative assessment strategies that teachers use can be either planned or

unplanned. Chappuis (2015) provided the term “informal” formative assessment for any

assessment that is not planned in advance and “formal” formative assessment for any

assessment that is planned. Response cards, signaling, a partner share, or one sentence

summaries are a few examples of formative assessment strategies that are often

unplanned. A teacher can quickly implement one of these strategies any time he or she wants to

check for student understanding. These informal formative assessment tasks, often called

on-the-fly, are beneficial “when teachable moments unexpectedly arise in the classroom”

(Yin et al., 2013, p. 534). On the other hand, answering warm-up questions, taking

multiple-choice quizzes, and filling out anticipation guides are examples of planned

formative assessment tasks that teachers can give students. These tasks should be

prepared in advance of the lesson. Questioning can be a quick and easy informal

formative assessment strategy for teachers to use; however, questions should be designed

prior to a lesson to prompt deeper and more informative responses (Jiang, 2014; Smart

& Marshall, 2013).

Formative assessment tasks given to students should have “low or no stakes

attached to [them]” (Nagro et al., 2016, p. 244). In other words, teachers should not use

formative assessments to academically punish or reward students for how well they

understand the curricular concepts during the learning process. Instead, they should be

used to inform teachers’ instructional decisions and to give students feedback that can

progress learning. Chappuis (2015) warned that grading formative assessment tasks

could negatively affect students and hinder the learning process, stating that if
teachers assign grades to formative assessment tasks and students do not do well, students

may feel they are not good at something or are not smart; they may even give up.

Instead, the strength of formative assessment is that the process does not reveal to

students that they are not good at something but that they “aren’t good at it . . . yet”

(Chappuis, 2015, p. 26). As students become familiar with the formative assessment

process, they can learn that feedback from formative assessment tasks allows them to take

ownership of their learning so that they can be academically successful. The idea that

learning is a result of effort, not a lack of ability, can be especially beneficial to low-

achieving students or to students who need more time to process new concepts

(Mehmood et al., 2012).

Although the formative assessment strategies discussed in this section were

categorized into three implementation times—before, during, and after instruction—

many formative assessment strategies can be used at various times throughout the lesson.

However, for any instructional strategy to be considered formative, it must be used by the

teacher to inform instruction, not merely given as a task (Black & Wiliam, 1998a; Black

& Wiliam, 2009; Chappuis, 2015; Duckor, 2014; Johnson et al., 2013; Miranda &

Hermann, 2015; Stiggins, 2009).

Assessing all students. Even though checking for student understanding is

central to the formative assessment process, Fisher and Frey (2014a) have found that

teachers often do not conduct these checks effectively. One problem with

implementation is that teachers frequently do not use formative assessment to elicit

feedback about current levels of understanding from more than a few students at a time
(Fisher & Frey, 2014a; Helf, 2015). Formative questioning is a common way for

teachers to check for student understanding, but often when teachers ask questions to the

class, only a few students raise their hands (Duckor, 2014). Consequently, only having a

few students participate during formative assessment does not provide enough

information about the class’s current understanding and is "simply not sufficient in

determining whether or not students 'get it'" (Fisher & Frey, 2014a, p. 5). As Wiliam

(2014) pointed out, a problem also exists when teachers randomly call on students who

are not raising their hands. Teachers are still only assessing the understanding of a few

students. Instructional decisions based on only a few students’ responses are not likely to

yield success (Wiliam, 2014). There is a lack of feedback on which to make instructional

decisions when teachers question only a few students. Therefore, teachers must give all

students, not just a select few, opportunities and encouragement to express their

understanding. When formative feedback from all students is elicited, the teacher avoids being "out

of touch with the understanding of most of the class" (Black & Wiliam, 1998a, p. 6).

Once teachers begin to involve every student in formative assessment tasks, they can

develop a better picture of student understanding and can use the feedback to properly

adjust their instruction to address gaps in learning (Chan et al., 2014).

Assessing the understanding of more students than just the few who raise their

hands is critical in urban schools. Johnson et al. (2013) found in their study of high-

performing urban schools that teachers gave formative tasks to check for understanding

from all students so they could ensure that every student was making progress toward

learning goals. For example, in a typical classroom, the teacher may call on students one
at a time to give an answer, leaving the other students unengaged and the teacher with

little information about the level of student understanding in class. Consequently, if the

few students who respond to a question do so correctly, then the teacher may falsely

conclude that all students understand and move on with instruction (Duckor, 2014;

Wiliam, 2014). Johnson et al. (2013), however, found that in high-performing schools,

teachers used formative assessment with all their students. For example, a teacher asked

all students to write a response on a whiteboard, giving every student an opportunity to

respond and be engaged. The teacher could see all the answers and quickly assess the

level of understanding in the class. Johnson et al.’s (2013) research provided many other

examples of formative assessment practices used to determine whole class understanding

in urban settings: (a) having all students respond in unison and listening to those with

different answers, (b) calling on students individually to gain more information about

their thinking, (c) having students write short responses and circulating around the room

to observe any errors in thinking, and (d) having students discuss concepts in groups

while the teacher walks around and monitors conversations for understanding.

Johnson et al.’s (2013) work demonstrated that teachers who used formative

assessment practices in high-performing schools rarely asked for answers from only a

few students. Doing so limits student involvement, and the disengaged students often fall

behind (Wiliam, 2014). Instead, teachers in these schools wanted to give all students

equal opportunities to respond to formative questions; this inclusive practice allowed for

better feedback about student understanding (Johnson et al., 2013). When teachers

routinely check the understanding of the whole class through the use of formative
assessment tasks, then student misunderstandings can surface (Fisher & Frey, 2014a).

Teachers can then adjust their instruction to address uncovered misunderstandings, close

learning gaps, and help students meet learning goals. Creating opportunities for all

students to respond to formative tasks that check for understanding, therefore, is an

important practice teachers can implement to help improve student achievement (Nagro

et al., 2016).

Also noteworthy is that there were clear expectations in schools where teachers

elicited responses from all students during formative assessment. The students knew that

“each day, in each class, they [would] be called upon to participate, engage, and

demonstrate their learning” (Johnson et al., 2013, p. 41). Because of these expectations,

students began to understand that the classroom was a place where errors and

misunderstandings meant growing as a learner (Wiliam, 2012). Students were willing to

share their thinking with others and knew that incorrect answers were a part of the

learning process (Black & Wiliam, 1998a). When students see the relevance of

demonstrating their understanding in class, participation during formative assessment

tasks ultimately becomes an avenue for them to take ownership of their learning, an

important principle of formative assessment and its constructivist approach.

Appropriate questioning. Questioning is a popular formative assessment

strategy teachers use to check for student understanding, but it should be implemented

appropriately to be beneficial to the formative assessment process. Teachers do not

always use questioning in a formative way. Formative questioning refers to the process

of asking questions to check for student understanding and evaluating the responses to

adjust instruction (Jiang, 2014). To be considered part of the formative assessment

process, then, questioning must be used to make instructional decisions that improve

learning.

Several important aspects of formative questioning emerged from the research;

among these are using wait time, being purposeful, and planning. Hill (2016)

recommended that teachers allow for time between asking students a question and

prompting them for a response (wait time) to better determine the extent of student

understanding from their responses. Duckor (2014) added that wait time was especially

important in mixed-ability classrooms where there might be a need for longer mental

processing. By giving students extra time to think, teachers can involve more students in

the formative assessment process. As a result, teachers can gain an accurate picture of

the current level of understanding in the class. Unfortunately, Hill (2016) found that even

though research recommends longer wait times, short wait times remain widespread in

practice.

Questions teachers ask to check for student understanding should be “purposeful

and strategic” (Johnson et al., 2013, p. 38). The formative questions should be focused

on learning goals and should consider possible student misconceptions and

misunderstandings (Duckor, 2014; Wylie & Lyon, 2015). Duckor (2014) stated that an

appropriate formative question “sizes up the context for learning, has a purpose related to

the lesson and unit plan, and, ideally, is related to larger essential questions in the

discipline” (p. 29). Teachers should also plan some formative questions in advance as
they consider learning goals, common student misconceptions, and the knowledge and

skills students bring with them to class (Wiliam, 2014). When teachers take the time to

plan formative questions in advance, the quality of their formative questioning increases

(Smart & Marshall, 2013), meaning teachers can elicit more developed student responses.

Gathering detailed information about student understanding during formative questioning

may allow teachers to make more informed instructional decisions that will help support

learning.

How aware teachers are of their students’ current levels of

understanding depends on the questions they pose (Smart & Marshall, 2013). Therefore,

the types of formative questions teachers ask students matter. Teachers should not focus

solely on formative questions with a single right answer, which leave deeper levels of

understanding unchecked (Duckor, 2014). Instead, formative questions should

promote thinking and uncover students’ conceptual understanding. Staunton and Dann

(2016), however, found appropriate formative questioning to be a challenge for many

teachers. They uncovered that teachers often ask low-level factual or recall questions

rather than high-level challenging questions that give them better insight into student

thinking. Several studies concluded that teachers lacked skills in appropriate formative

questioning that elicited a deeper conceptual understanding (Heitink et al., 2016;

Marshall & Smart, 2013; Yin et al., 2013). Because the intention of using formative

assessment is to increase student understanding, only asking low-level formative

questions will not elicit the feedback necessary to advance student learning to the extent

that it could (Bulunuz, Bulunuz, & Peker, 2014; Duckor & Holmberg, 2017). Heritage
and Heritage (2013) claimed, “When working within the ZPD, part of the teacher’s task

is to resist the temptation to foreclose the child’s own conceptual work through the use of

known-answer questioning, overly transparent directive questioning, or even providing

explicit solutions” (p. 178). Therefore, appropriate and thoughtful questioning is

essential when used during the formative assessment process to help students meet

learning goals.

Jiang (2014) explored how teachers used questions to uncover student

understanding. He divided questions into two categories: (a) convergent—questions that

were primarily used for factual recall (low-level thinking) and (b) divergent—open-ended

questions that encouraged a variety of responses (high-level thinking). Results showed

that teachers asked significantly more convergent questions than divergent questions.

Even though convergent questioning is powerful when it is used to progress student

learning, Jiang (2014) recommended that teachers should aim to increase their divergent

questioning to elicit better formative feedback about student understanding. Black and

Wiliam (1998a) also proposed that teachers use more divergent, or open-ended, questions

to make better instructional decisions. Jiang (2014) agreed with this assertion and stated

that divergent questions are “capable of eliciting richer learner information” so that

teachers are “better able to gauge student needs and make pedagogical decisions

accordingly” (p. 297). Likewise, Ateh (2015) declared that it was essential for teachers

to gather evidence of students’ deeper and conceptual understanding so they could

properly adjust instruction to influence student learning; convergent questioning alone

did not provide the information needed to make sound instructional decisions.
Similar results were found in Kira, Komba, Kafanabo, and Tilya’s (2013) study of

teachers’ ability to use questioning to measure student understanding and promote

learning. Kira et al.’s (2013) research, like Jiang’s (2014), showed that most teachers

primarily used convergent questioning to check for student understanding. In fact, 80%

of the teachers observed experienced problems balancing convergent and divergent

questions (Kira et al., 2013). In addition, teachers did not ask questions frequently nor

did they try to elicit responses from all students; they systematically called on the few

who raised their hands. This observation confirms the earlier affirmation by Fisher and

Frey (2014a) stating that teacher formative questioning is often ineffective and many

students do not participate when asked questions. If teachers only receive feedback from

a select number of students about their understanding, then responses from these few

active students may cause teachers to “believe that the same responses would be given by

the rest of the students if they were given opportunities to do so” (Kira et al., 2013, p.

73). The assumption that the understanding of a few is representative of all students

“leads to a false sense of feedback” (Duckor, 2014, p. 31). Insufficient feedback about

students’ current levels of understanding because of ineffective formative questioning

may result in teachers failing to address the misunderstandings that stand between

students and the learning goals. Students not meeting learning goals can negatively affect achievement

levels.

How often to check for student understanding. Implementing formative

assessment on a consistent basis is an important characteristic of the practice.

Miranda and Hermann (2015) found from their research that teachers were better able to
adjust their instruction and to help students gain a clearer understanding of curricular

concepts when formative assessment was used regularly in the classroom. Constant

checking for student understanding is a crucial part of the formative assessment process.

Johnson et al. (2013), in their study of high-performing urban schools, determined that

effective teachers check student understanding “continually and persistently” after new

curricular concepts are presented “to determine if students heard, processed, and

internalized the information accurately” (p. 38). More specifically, Havnes et al. (2012)

recommended that formative assessment should take place every day, whether it is

planned or unplanned. Curry et al. (2016) elaborated on the previous recommendation by

stating that formative assessment data from checking for understanding should be

collected daily to allow teachers to gain a more detailed picture of their students’ levels

of understanding and to determine what, if any, instructional adjustments should be

made.

Popham’s (2013) research offered insight into how often teachers decided to

implement formative assessment tasks to check for student understanding. He found

several factors that affected teachers checking for understanding: (a) the amount of time

to prepare the formative assessment task, (b) the amount of time to administer the

formative assessment task, (c) the student’s level of background knowledge, (d) the

complexity of the subject matter, (e) the teacher’s level of experience teaching the subject

matter, and (f) the teacher’s understanding of and commitment to using formative

assessment. Without understanding teachers’ perceptions of formative assessment, such

as factors hindering its use in the classroom, school leaders may not have proper
instructional supports in place for consistent implementation of this research-based

practice.

Adjusting Instruction

An essential component of the formative assessment process is that the

information gathered from the formative tasks is used to adjust instruction (Ateh, 2015;

Duckor, 2014). In fact, Black and Wiliam (1998b) were the first to insist that for

assessment to be considered formative, the results must be used to adjust teaching.

Therefore, during this phase of the formative assessment process, teachers must now ask

themselves a different set of questions: From the information I collected about student

understanding, is there a learning gap that should be addressed? What adjustments should

I make to my instruction? What student misunderstandings do I need to address? What

instructional activities will help me bridge the gap between a student’s current level of

understanding and where they need to be? (Chappuis, 2015; Herman, 2013).

Using information from formative assessment. After collecting information

about student understanding, the next step in the formative assessment process is for

teachers to analyze the data so they can adjust their instruction to address gaps in student

learning (Konrad, 2014). Wylie and Lyon (2015) revealed that for teachers, the most

challenging part of the formative assessment process is the ability to use the evidence

collected about student understanding to inform their instruction. Miranda and Hermann

(2015) expanded on the struggle for teachers to connect formative assessment feedback

and instruction by stating, “In our 17 years of classroom experience in teaching and

providing professional development programs to both pre-service and in-service teachers,


we have found that many teachers often have questions about how to effectively use

formative assessment to modify instruction” (p. 80).

Similarly, Wood et al. (2016) indicated that teachers do not

always know what to do after they have collected information from formative

assessment. Studies have revealed that most teachers had not been trained on how to use

feedback collected about student understanding to inform their instructional planning

(Curry et al., 2016; Dunn et al., 2012; Mandinach & Gummer, 2013). Lack of training

could contribute to teachers not using formative assessment data to make necessary

instructional changes to meet student learning needs. The inability to effectively use

formative assessment data is noteworthy because, as Ruiz-Primo and Li (2013) asserted,

“Knowing how to use such information to make instructional decisions is critical” to the

formative assessment process (p. 173). In other words, student learning may not progress

if formative feedback is only collected but not acted upon.

Making instructional decisions. Once teachers have analyzed and interpreted

information collected about student understanding from formative assessment tasks, they

can then determine the appropriate next steps for instruction. Feedback from student

responses collected during formative assessment tasks is meant to supply teachers with

the information they need to make sound instructional adjustments that support student

learning needs. Trumbull and Lash (2013), however, identified making instructional

adjustments as another area of the formative assessment process where teachers

struggle. They found that teachers often did not know what to do with the data they

collected from formative assessment tasks. Even though teachers may have gathered and
analyzed data about their students’ understanding, they often were “not able to identify,

target, and carry out specific instructional steps to close the learning gaps” (Trumbull &

Lash, 2013, p. 13).

The timing of a formative assessment within a lesson can affect how

teachers adjust their instruction. For example, if information from a formative

assessment task given before instruction shows that students do not fully understand all

the concepts of the past lesson, then the teacher can choose an activity to review (Magno

& Lizada, 2015). Formative assessment tasks may also be given at the beginning of class

to determine if students have prior knowledge of a concept needed for the upcoming

lesson. If the data showed that students already knew the concept, then the teacher could

instruct at a higher level or proceed to the next concept; however, if the data showed

students did not have prior knowledge, then the teacher could spend more time on the

concept or slow the pace (Magno & Lizada, 2015).

During instruction, formative assessment data about student understanding can

help teachers decide how to continue with the lesson. They may change the pacing,

reteach a concept, start a discussion about misconceptions, or implement an activity to

help students practice concepts they are struggling to learn (Johnson et al., 2013; Magno & Lizada, 2015). Teachers may also use guided instruction. Guided instruction,

according to Fisher and Frey (2014a), is “the strategic use of questions, prompts, or cues

designed to facilitate student thinking” (p. 13). These actions can help give the

scaffolding students need to move from their current level of understanding to the next.

Formative assessment data at the end of instruction can show if students understood the
learning goals of the lesson. Teachers can use the information to identify concepts with

which students are struggling and plan future activities accordingly (Conderman &

Hedin, 2012; Johnson et al., 2013). Whether formative assessment data are collected and

interpreted before, during, or after a lesson, teachers should ask themselves a question to

help determine how to adjust their instruction: “Do their [students’] responses reveal

incomplete understanding, flawed reasoning, or misconceptions?” (Chappuis, 2015, p.

13). The answer to this question can help teachers make more accurate and effective

instructional decisions at any time during the lesson.

By examining the learning goals and data collected about student understanding

from formative assessment tasks, teachers can thoughtfully determine what instructional

adjustments should be made to support student learning (Wood et al., 2016). On some

occasions, teachers may adjust their instruction with the whole class by reteaching or

choosing an alternate instructional approach (Bellert, 2015). On other occasions, teachers

may want to differentiate instruction to better meet individual student learning needs

(Tomlinson, 2014). Because the formative assessment process allows teachers to

determine which students are meeting learning goals and which need more support,

teachers can choose to match students with instructional activities to help bridge learning

gaps (McGlynn & Kelly, 2017). Sass-Henke (2013) suggested two types of instructional

adjustments for this purpose: remediation and enrichment. Remediation is any corrective

activity given to students needing extra practice (Sass-Henke, 2013). Examples include

reteaching, learning stations, correctives, peer tutoring, or technology tools. Teachers can

deliver these remediation activities to the whole class or just to individual students
depending on the formative assessment results. Enrichment activities, on the other hand,

can be given to students who understand the curricular concepts and meet the learning

goals (Sass-Henke, 2013). These activities extend student knowledge by providing them

with more in-depth learning on the current topic. Therefore, teachers who continuously use formative assessment data to adapt instruction must be flexible in

their lesson planning, as “the weekly schedule can change on a moment’s notice if an

understanding check reveals a need for reteaching” (Sass-Henke, 2013, p. 45). Likewise,

Tomlinson (2014) expounded, “It is wasteful of time, resources, and learner potential not

to make instructional plans based on that [students’] understanding. Assessment of each

learning experience informs plans for the next learning experience. Such an assessment

process never ends” (p. 14). In other words, the formative assessment process is a cycle

in which teachers must make instructional decisions based on student data from formative

assessment tasks, adjust their instruction accordingly, and then reassess students to

determine their new level of understanding.

Implications

Results of this study could have positive implications for formative assessment

use and practices that affect student understanding of state and district learning goals and,

accordingly, have the potential to improve student achievement. School leaders,

with the information from this study, may be better able to support the consistent use of

formative assessment practices that help teachers check for student understanding and

adjust their instruction. The additional support may take the form of professional

development aimed to (a) introduce the purpose, role, and benefits of formative
assessment; (b) enhance formative assessment practices that data showed need

strengthening; and (c) demonstrate a variety of formative assessment strategies that can

help teachers gather information on student understanding. Other strategies to support

consistent teacher formative assessment use could be provided through coaching,

dedicated time for discussions about formative assessment (such as in professional

learning communities), and allocation of resources to assist formative assessment

implementation. Support would be especially beneficial for teachers who have never

participated in formative assessment training, which research has shown to be true for

most teachers (Curry et al., 2016; Dunn et al., 2012; Mandinach & Gummer, 2013).

This study could also help stakeholders provide the needed supports to further

teacher formative assessment use not only at the local high school but also at the district

or state level. Consequently, by supporting consistent implementation of formative

assessment in classrooms to help students meet learning goals, school leaders may see

results such as better grades, more students passing classes, more students with enough

credits to move to the next grade level, increased graduation rates, and higher

standardized test scores.

Another outcome of this study may be school leaders’ realizations about the

importance of regularly collecting information on teacher formative assessment

implementation in their schools so that they can support consistent use of this practice. A

possible project could be to develop a tool and corresponding plan for school leaders to

gather information on how teachers use formative assessment to check for understanding

and to adjust instruction. A tool to gather school formative assessment information could
take the form of teacher surveys, observation protocols, or interviews. The tool, along

with an implementation plan for the school year, could help school leaders make

informed decisions regarding any needed adjustments to the established formative

assessment supports. In addition, the formative assessment plan may also be used as an

evaluation tool to determine if the current supports school leaders have provided teachers

are effective and beneficial. Further studies may include a quantitative analysis to

determine if there is an association between the frequency of teacher formative

assessment use and student achievement at Hammond.

Summary

A variety of student achievement issues at Hammond spurred school leaders to

recommend that teachers implement formative assessment to increase the number of

students meeting learning goals. Despite several years of encouraging the use of

formative assessment in classrooms, the local data at Hammond High School revealed a

lack of consistent use of this practice. This qualitative case study explored how teachers

used formative assessment to check for understanding and to adjust instruction—the two

components of the formative assessment process that administration had recommended teachers implement to support student learning. A review of the literature provided

evidence that formative assessment use by classroom teachers can result in increased

student understanding of curricular concepts and, correspondingly, increased student

achievement (Andersson & Palm, 2017; Baird et al., 2014; Cornelius, 2014; Filsecker &

Kerres, 2012; Hudesman et al., 2013; Madison-Harris & Muoneke, 2012; Yin et al.,

2013). Therefore, student achievement issues at Hammond may be improved when


school leaders have information about teachers’ formative assessment use and can make

informed decisions regarding any needed instructional supports.

Section 2 describes how I conducted this qualitative case study regarding

Hammond High School teachers’ use of formative assessment to check for student

understanding and to adjust instruction. The section contains a description of the study

design and approach, as well as a justification for the design based on the local problem.

I describe the criteria for the selection of participants and the data collection instruments,

which are in the form of observations, interviews, and teacher logs. Also found in this

section are the processes I used to protect participants’ rights and to ensure the integrity

of the information collected. In the final sections, I explain how the data were analyzed,

discuss the findings resulting from the analysis, and review the study limitations.
Section 2: The Methodology

Research Design and Approach

The purpose of this qualitative study was to gain a better understanding of how teachers at Hammond used formative assessment to check for student understanding and to

adjust instruction to help students meet learning goals. Rich, thick descriptions of

participants’ perceptions and use of formative assessment, as they relate to student

understanding of concepts, were needed so that school leaders could make informed

decisions regarding formative assessment support. Therefore, I chose a qualitative

approach that would, according to Yin (2016), yield the level of detailed data needed to

gain a deep understanding of how teachers use formative assessment.

In qualitative studies, the researcher is the key data collection instrument

(Creswell, 2013; Merriam & Tisdell, 2016; Yin, 2016). In this study, I interacted directly

with participants at Hammond to gather information about their formative assessment

practices and perceptions. Because of the “complexity of the setting and the diversity of

its participants,” Yin (2016) recommended that qualitative research include “collecting,

integrating, and presenting data from a variety of sources” (p. 11). In light of this

recommendation, I collected formative assessment data from participants by observing

classrooms, conducting interviews, and examining teacher logs. Having multiple data

points allowed me to review and organize information “into categories or themes that cut

across all of the data sources” (Creswell, 2013, p. 48). The development of categories

and themes is part of the inductive nature of qualitative studies that researchers use to

make sense of the data and to develop a deeper understanding of the problem. Yin
(2016) described the inductive approach to qualitative research as one where the data

drives the development of broader concepts. The broader concepts in this study emerged

from the data analysis and identification of themes I used to make meaning of the

multiple sources of data collected from the participants. Meaning-making is the central

focus of qualitative research (Yin, 2016). By working to understand the practice and

perceptions of teachers’ formative assessment use, I sought to provide valuable

information that Hammond school leaders need to make informed decisions to support

teachers’ consistent implementation of formative assessment.

There are several different approaches, including case study, that can drive the

methodology of a qualitative study (Yin, 2014). According to Yin (2014), a case study is

recommended when a “how” question is asked and the research involves a set of events

that the investigator, at the location of study, does not manipulate or control. Merriam

and Tisdell (2016) added that a case study includes an “in-depth description and analysis

of a bounded system” (p. 38). A bounded system includes a particular group of people in

a specific setting at a certain point in time (Creswell, 2013). A case study, therefore, was

a logical choice as the research methodology because I conducted this study at Hammond

High School, a bounded system, and I investigated in depth how teachers used formative

assessment within their natural setting.

Qualitative case studies, which seek a deep understanding of participants through

observations and interviews, can give more insight into a phenomenon than a quantitative

study. Merriam and Tisdell (2016) stated, “A central characteristic of all qualitative

research is that individuals construct reality in interaction with their social worlds . . .
[and] the researcher is interested in understanding the meaning a phenomenon has for

those involved” (p. 24). In this case, the phenomenon was teacher formative assessment

use. Whereas qualitative research captures information about individual participants’

actions and perspectives, quantitative research focuses on the collection of numeric data

that can be used to statistically represent a population (Yin, 2016). I developed this study

to understand how teachers implemented formative assessment practices, not to collect

measurable data. Furthermore, quantitative research is often experimental in nature and

usually involves manipulating and testing variables; this is contrary to qualitative studies,

which rely on minimal researcher intrusion (Yin, 2016). My intention for this study was

to collect descriptive data in their real-world context; I had no need to identify or

manipulate variables. In consideration of the previous statements, quantitative

methodology would have been a less effective avenue to gather and analyze data. The

purpose of this study was not to determine the effect of formative assessment on student

achievement, which would require experimental research, but to understand how teachers

at Hammond used formative assessment practices to help students meet learning goals so

that more consistent use of formative assessment could be achieved.

A qualitative approach offered the insights and depth of understanding I needed to

uncover teachers’ practices and perceptions regarding formative assessment use;

however, there are several methodologies to choose from within the qualitative tradition

such as case study, grounded theory, ethnography, phenomenology, and narrative

analysis (Yin, 2016). There were several reasons why I selected a case study

methodology instead of one of the other qualitative approaches. A grounded theory study
results in a theory about a phenomenon that develops from the data (Merriam & Tisdell,

2016). The purpose of this study, however, was to produce a rich, thick description of

formative assessment use at Hammond, not to propose a theory about its use. An

ethnographic study “strives to understand the interaction of individuals not just with

others, but also with the culture of the society in which they live” (Merriam & Tisdell,

2016, p. 24). Although I acknowledge that school culture may affect teachers’ formative

assessment use or perceptions at Hammond, the intent of this study was not to focus on

the culture of the school but rather on how teachers used formative assessment to help

students reach learning goals.

A phenomenological study centers on the common meaning of lived experiences

for a small group of individuals (Patton, 2015). Rather than collecting and reflecting on

the meaning participants make about formative assessment use, this study concentrated

on gathering data and reporting on how teachers implement and perceive formative

assessment. Finally, narrative analysis uses people’s stories to understand their

experiences (Merriam & Tisdell, 2016). Although some narratives may be part of the

data collection process to help participants describe their experiences with formative

assessment in the classroom, participant stories alone would not give the depth of

information needed for this study. After examining possible research approaches for this study, I concluded that a qualitative case study would be the best approach and methodology to yield the information local school leaders would need to help

them make informed decisions about how to best support consistent formative assessment

implementation.
Participants

Hammond High, a large urban school located in the Midwest United States, was

the setting of this case study. Hammond had a staff of 24 classroom teachers, seven

special education certified resource teachers who assisted classroom teachers, and four

administrative school leaders (a head principal, assistant principal, dean of students, and

school improvement coordinator). All four school leaders had held their positions for 2 years or less. Hammond predominantly serves at-risk minority students. The staff had

been struggling with student achievement-related issues for many years including low

state assessment scores and high failure rates.

The target population for this study consisted of 24 classroom teachers at

Hammond High School. The classroom teachers were all considered highly qualified to

teach in their subject areas, which meant that teachers had at least a bachelor’s degree,

possessed a state certification, passed basic skills and subject area examinations, and

taught within their majors or minors. According to the school administrator, there had

been a consistently high teacher turnover rate at Hammond for the past several years,

resulting in many new teachers in the building, a majority having less than 10 years of

teaching experience. Of the 24 classroom teachers working at Hammond during the

2017-2018 school year, 63% were returning teachers and 27% were teachers new to

Hammond; of the new teachers, 22% were first-year teachers. The teachers’ ethnic

backgrounds consisted of 14 Caucasians, six African Americans, three Hispanics, and

one Asian. Of these teachers, 14 were female and 10 were male. Bachelor’s degrees

were held by 62% of the teachers, and 38% of the teachers had master’s degrees.
Participant Selection and Access

I purposefully selected participants from the target population of 24 high school

teachers at Hammond. The goal of purposive sampling, according to Yin (2014), is to

deliberately select participants who will yield ample pertinent data for a study. Merriam

and Tisdell (2016) added, “Purposeful sampling is based on the assumption that the

investigator wants to discover, understand, and gain insight and therefore must select a

sample from which the most can be learned” (p. 96). Because the purpose of the study

was to gain information about how teachers used formative assessment to check for

student understanding and to adjust instruction, I selected a heterogeneous sample. To

select participants who would yield the richest information to answer the research

questions, I gathered a sample of participants from both genders and a variety of grade

levels, subject areas, and years teaching. Patton (2015) and Creswell (2013) called this

purposeful sampling strategy maximum variation sampling. Maximum variation

sampling is based on the logic that “any common patterns that emerge from a great

variation are of particular interest and value in capturing the core experiences and central,

shared dimensions of a setting or phenomenon” (Patton, 2015, pp. 234-235). Likewise, Yin

(2014) argued that researchers should gather data from participants with different

perspectives to gain the best insights into a phenomenon. Therefore, I saw the value in

obtaining information about formative assessment use and perspectives from a wide

range of teachers at the local school. This heterogeneous sampling of teachers allowed

for a wider scope of data so that school leaders at Hammond might have a more

encompassing picture of formative assessment use in their school. As a result, school


leaders may make informed decisions regarding any needed instructional supports

regarding formative assessment.

I planned several procedures to ensure access to participants to collect data. I

began by completing a Request to Research form obtained from the local district. Upon

approval, I contacted the school principal by email and arranged a time to give a short

presentation about the study at a staff meeting. Because the purpose of the study was to

gain information about how teachers used formative assessment, I did not want only

teachers who consistently implemented this practice to volunteer. Therefore, I

emphasized at the presentation that teachers were needed whether they implemented

formative assessment regularly or not.

At the end of the presentation, I made my email address available so teachers

could contact me if they were interested in participating in the study. Volunteers who

contacted me were emailed an online form that asked for demographic information such as

gender, number of years teaching, grade level(s) taught, and subject(s) taught. Even

though the study was about how teachers implemented formative assessment, the form

did not contain a question that asked teachers to gauge their formative assessment use.

My lack of knowledge about their formative assessment use allowed me to choose teachers without regard to their formative assessment practices, yielding a less biased sample. I selected a heterogeneous sample of participants from the volunteers, based

on the completed surveys, which represented a range of years teaching, grade levels

taught, and subjects taught. I contacted those participants selected for the study and gave
them consent forms. The signed consent forms are being kept in a locked file for 5 years,

at which time they will be shredded.

The participants had the following demographics: five males and five

females; five under the age of 40 and five over; six had under 8 years of teaching

experience and four had over 8 years, with an average of 9 years teaching; all identified

themselves as Caucasian; eight had earned a master’s degree and two a bachelor’s

degree; and at least two classes from each grade level (9-12) were represented. Subjects

taught included science, language arts, mathematics, and social studies. One advanced placement (AP) class, one special education class, and eight general education classes were represented. Four participants had also taught at a college before teaching high

school.

I used a sample size of 10 participants for the study (see Appendix B for

participant demographics). Patton (2015) asserted that qualitative studies do not have

rules for sample sizes; the size is based on attributes such as the purpose of the study, the

amount of in-depth information needed, credibility, time, and resources. Similarly,

Merriam and Tisdell (2016) affirmed that there is no particular set sample size for

qualitative studies and that it depends on an “adequate number of participants” needed to

answer the research questions (p. 101). Merriam and Tisdell (2016) also suggested that a

sample size depends on reaching saturation in the observations and interview responses;

therefore, the sample size needs to be based on the data collected during the study.

A sample size of 10 participants allowed me to gather in-depth data from each

participant and from multiple sources to help achieve saturation (Patton, 2015).
Saturation, in this case, meant that I gained no new insights from responses given to the

research questions (Merriam & Tisdell, 2016). For this study, a sample size of 10

participants was enough to gain the necessary rich, thick data for the study to be

beneficial for school leaders to make informed decisions about supporting consistent use

of formative assessment. If saturation was not achieved with 10 participants, then I was

willing to increase the sample size to collect more data. Patton (2015) called this kind of

flexibility in a study a part of the emergent design of qualitative research.

I understand the importance of the researcher-participant relationship in research.

Genuine relationships that allow for open and effective communication with participants,

according to Yin (2016), can be challenging, so researchers must consider ahead of time how they will accomplish this rapport. To help build a good working relationship with

participants, I began by establishing trust. I was forthcoming and honest with

participants during all stages of the study, assured participants of confidentiality, and was

available to answer any questions or concerns. When questions arose, I followed the

advice of Yin (2016), who suggested the researcher should handle them “in a

conversational and friendly manner, as opposed to a tone that is formal, legalistic, or

defensive” (p. 51). Interactions with participants included individual interviews,

classroom observations, and discussions about teacher logs, and I considered each

participant’s schedule and availability (Yin, 2014). At all times, the participants were

treated with respect and their time was valued. Some participants knew me prior to the

study because I was previously a well-respected teacher in the school, whereas other
participants were hired after my tenure and required rapport-building. Trust and rapport

developed with all participants as I interacted with them throughout the study.

Ethical Protection of Participants

The ethical protection of all participants involved in a study is of utmost

importance (Merriam & Tisdell, 2016; Patton, 2015; Yin, 2014, 2016). Before I

collected any data from participants for this study, I obtained approval from Walden

University’s Institutional Review Board (IRB), a committee that provides a set of

guidelines to protect research participants (Approval Number: 11-21-17-0497717). Yin

(2014) called this approval “the most imperative step” before beginning a study (p. 78).

All participants who volunteered to participate in the study, in accordance with IRB

guidelines, received and signed an informed consent form that detailed the purpose of the

study, potential benefits and risks, protection from harm, and confidentiality. I also

informed participants that if they felt uncomfortable or wished to exit the study at any

time, then they could do so with no penalties.

Researchers must also maintain the confidentiality of their participants (Yin, 2016). All locations and people associated with the study were kept confidential. I achieved

protection of individual participant identities by using titles such as “Participant 1,

Participant 2 . . .” instead of actual names during the coding process and analysis write-

up. I also used a pseudonym for the location. All data collected were securely and

confidentially handled according to Walden University’s IRB procedures. I kept all

written reflective journal notes and protocols in a three-ring binder in a locked cabinet.

After 5 years, they will be shredded. I scanned all the field notes as well as the reflective
research journal notes electronically (identifiable information was redacted) or typed

them into a computer document that was stored on a password-protected flash drive. I

securely stored the flash drive in a locked cabinet in my home office when it was not in

use. I will store the flash drive for 5 years after the completion of the study and then have

it destroyed.

Data Collection

Justification of Data Choices

I collected data for the study from three sources: classroom observations,

individual interviews, and teacher logs. These data points were among several

recommended by Yin (2016) for qualitative research and were chosen to yield the rich

information needed for this study. Because the purpose of the study was to gain insights

into how teachers at Hammond used formative assessment to check for student

understanding and to adjust instruction, data points that provided information about

teacher formative assessment practices and perceptions were necessary. I chose each

data point for the study because of its potential to produce the information needed to

answer the research questions. Observations permitted me to witness how teachers used

formative assessment strategies and tasks to check for student understanding. Interviews

allowed me to ask open-ended questions to (a) gain a deeper understanding about teacher

formative assessment use, (b) determine how teachers used formative assessment

feedback to adjust their instruction, and (c) gather perceptions that may be influencing

teacher formative assessment use. I recorded data from a single lesson period for each

participant during observations; however, the logs allowed me to capture teachers’


formative assessment practices over a longer period. Teacher classroom documentation

in the logs provided me with insight into how teachers used formative assessment to

check for student understanding and to adjust instruction during their daily classroom

instruction.

The three data points addressed the qualitative research tradition of triangulation

that is used to help establish credibility (Yin, 2014). I chose the data points so that the

research questions could be sufficiently answered and the study’s purpose could be

fulfilled. Taken together, the three data points may contribute to the results needed to

help local school leaders make informed decisions regarding formative assessment

support, thus satisfying Patton’s (2015) claim that using multiple data points serves to

strengthen confidence in a study’s conclusion.

I took highly detailed and organized field notes during my study. Field notes can

include settings, observations, and direct quotations (Patton, 2015). I allotted time after

each observation and interview to review, verify, and enhance field notes as prescribed

by Yin (2016). Field notes greatly assist a researcher during data analysis (Merriam &

Tisdell, 2016). In addition, I wrote reflective journal notes throughout the research

process to record questions, insights, potential biases, and emergent themes (Creswell,

2013; Merriam & Tisdell, 2016). As Patton (2015) recommended, I also made notes

about my reactions, impressions, and interpretations.

Direct Observations

Direct observations provided an opportunity for me to identify how participants

implemented formative assessment strategies to check for student understanding during a


class period. Observations also allowed me to witness techniques the participants used to

adjust their instruction after collecting feedback about student understanding from a

formative assessment. Yin (2016) called observations in qualitative studies “an

invaluable way of collecting data” that should be “highly cherished” because the

information is being sensed directly by the researcher and not filtered through another

person’s point of view (p. 143). In other words, observations are a firsthand account

rather than a secondhand narrative (Merriam & Tisdell, 2016). According to Patton

(2015), direct observations are also valuable because they allow the researcher to (a)

more deeply understand the context of the case, (b) be more open and inductive, (c)

notice nuances in which the participant may not be aware, (d) learn things that may not

be comfortable for the participant to discuss during an interview, and (e) form

impressions that can be used to help understand the participants and their settings more

fully.

To ensure that the data gathered during the observations were focused and aligned

with the research questions guiding the study, I developed an observation protocol (see

Appendix C for observation protocol). An observation protocol is a predesigned form

often used in qualitative case studies that allows researchers to organize and record

specific data and provides a space to record descriptive and reflective notes (Creswell,

2013). The categories on the protocol aligned with information addressed in the literature

review and allowed me to collect data to help understand teachers’ formative assessment

use at Hammond. Data collected for the protocol included information about (a) the

details of the setting, (b) the formative assessment strategies implemented to check for
student understanding, (c) when the strategies were implemented during the instructional

period, (d) the breadth of feedback the teacher elicited about students’ current understanding during the formative assessment task, and (e) instructional adjustments

observed due to student feedback from the formative assessment tasks. These five

observation categories helped me answer the first two research questions regarding how

teachers use formative assessment to check for student understanding and to adjust

instruction.

Three colleagues vetted the observation protocol prior to its actual use (Yin,

2016). The first colleague had an advanced degree in education, was a reading

interventionist in a large urban school district for 13 years, and for the past 8 years has

been a reading specialist, consultant, and literacy coach for teachers. The second

colleague had an advanced degree in education and leadership, taught Language Arts for

8 years, was a high school principal for 11 years, and for the past 14 years has been an

educational consultant. The third colleague had an advanced degree in education, was an

English teacher and reading specialist for 20 years, and currently supports students and

teachers in the area of literacy at a school she founded. I made changes based on my

colleagues’ recommendations.

Although using protocols can be a useful means to focus an observation (or

interview), Yin (2016) warned researchers not to let protocols undermine their study by

restricting data collection. Because of the possibility that the protocol could limit data

collection, I kept “an open mind to capture properly a field perspective and to attend to

emerging and unexpected information” during the observations (Yin, 2016, p. 107).
Observations have many strengths and can yield valuable data during a qualitative

study; however, observations do have some drawbacks. A few weaknesses include the

possibility that (a) the researcher may affect the participant’s behavior, (b) the

researcher’s perception may affect the data recorded, (c) the observational data are

limited to what is observed during a given time period, (d) the activities observed may not

be typical of an average classroom lesson, and (e) the researcher can only observe

behaviors, not what the participant is actually thinking (Patton, 2015). I collected teacher

logs and conducted individual interviews in combination with observations to help

address some of these issues as well as to verify findings.

Interviews

Interviews were a second source of qualitative data. The purpose of conducting

interviews is to discover what cannot be directly observed, such as thoughts, feelings, or

perspectives, by asking participants questions (Patton, 2015). I chose to conduct semi-

structured interviews. In semi-structured interviews, the researcher develops a list of

questions in advance that contains specific data needed from all participants; however,

there should be a mix of structured and unstructured questions to allow for flexibility

(Merriam & Tisdell, 2016). Yin (2016) described the flexibility of questioning as a

customization of the interview to each participant. The openness of the questions in a

qualitative study, as opposed to the close-ended nature of questionnaires and surveys of

quantitative studies, allows the researcher to “capture the complexities of individual

perceptions and experiences” and permits the participants to “express their own

understandings in their own terms” (Patton, 2015, p. 353). As a result of semi-structured


interviews, I gathered rich, thick data of participants’ explanations about their formative

assessment practices.

As with the observations, I developed a protocol to ensure that the data gathered

during the interviews were focused and aligned with the research questions guiding the

study. An interview protocol is a guide that contains questions or prompts that the

researcher will use during the interview (Yin, 2016). All interview questions were

crafted with the purpose of the study and research questions in mind. I had the interview

protocol vetted by the three colleagues who reviewed the observation protocol, and I

made changes based on their recommendations. Questions included (a) Do you ever use

formative assessment to check for student understanding in class? If so, please give

examples and explain. (b) Discuss how often you typically check for student

understanding and why. (c) At what point(s) during a lesson/class period do you typically

use formative assessment to check for student understanding and what is the reason(s)

you use formative assessment at this time? (d) When you want to check for student

understanding, how do you decide what strategy to use? (e) Do you ever adjust your

instruction as a result of student feedback from formative assessment? If yes, how so?

(see Appendix D for interview protocol). Questions such as these provided insights

into how teachers used formative assessment to check for understanding and to adjust

their instruction.

I also developed several interview questions to answer the third research question,

“What are teachers’ perceptions of formative assessment to check for understanding and

to adjust instruction?” Questions included (a) In your own words, how would you define
formative assessment? (b) Do you believe there are benefits of regularly using formative

assessment to check for student understanding? If so, what are they? (c) Who should be

checked in class for their understanding of a lesson’s learning goals/targets, when should

they be checked, and how often should they be checked? (d) Are there challenges that

keep you from using formative assessment to check for student understanding and to

adjust your instruction with more fidelity? If so, what are they? (e) What instances or

circumstances might cause you to use formative assessment (to check for student

understanding and to adjust instruction) with more fidelity in your classroom? The

answers to these questions provided important insights into teacher formative assessment

use. The success of an interview relies not only on the quality of the pre-developed

questions and prompts, but also on the probing questions that can elicit more details from

the participants (Merriam & Tisdell, 2016). Therefore, I asked questions when necessary

to develop answers and gain a greater understanding of participants’ perceptions and

behaviors regarding formative assessment use.

I scheduled interview times with participants via email. Interviews were no more

than 60 minutes long and took place in a comfortable and convenient location of the

participant’s choosing. I encouraged participants to be open with their responses and

reminded them that their answers were confidential. I took special care to keep the

interview conversational, to be nondirective, and to stay neutral (Yin, 2016). Because

verbatim responses from participants during the interviews were essential to data

analysis, all interviews were audiotaped (with permission) and later transcribed (Patton,

2015). During the interviews, I also noted participant responses, gestures, and other
nonverbal feedback (Patton, 2015). After each interview, I added details and made notes

in my reflective research journal of any connections, impressions, and developing ideas

(Yin, 2016).

Logs

To gather evidence that yielded greater insights into teachers’ daily formative

assessment practices, I requested that participants keep a classroom log. Yin (2016)

recommended this type of field-based documentation as another means to collect data

during a qualitative study. I made all participants aware of the obligation to keep a log

when they volunteered for the study. In the logs, teachers recorded information about

their formative assessment use to check for student understanding and to adjust

instruction. I developed the log for participants to capture their classroom data so that I

could gather more information to address the research questions. The logs also served as a

triangulation data point (Creswell, 2013). Because observations of participants’ classrooms only produced data about how participants checked for understanding and adjusted instruction

during one class period, the logging was necessary to gather more information about the

depth and breadth with which teachers used these two practices. Information gathered from

the logs, along with the observations, interviews, and reflective research journal notes,

helped validate the data (Yin, 2016).

The log contained questions that assisted in answering the first two research

questions. The questions included information about (a) the formative assessment

strategy used to check for student understanding, (b) whether the formative assessment

was planned prior to the lesson or was unplanned, (c) when the strategy was given
(before, during, and/or after student learning), (d) the depth and breadth with which teachers

elicited current student understanding, (e) what was learned as a result of the information

gathered from the formative assessment, and (f) how, if at all, participants used or

planned to use the feedback from the formative assessment to adjust instruction (see

Appendix E for Teacher Classroom Formative Assessment Log). The same three

colleagues who reviewed the observation and interview protocols vetted the log. I

understand the value of teachers’ time, so I created the log to be as unobtrusive and time-efficient as possible. My colleagues verified the log’s ease of use and made

recommendations regarding its design and content.

Participants received a copy of the log after their interviews. I explained the

purpose of the log, provided the definition of formative assessment used in this study for

clarification, and gave a sample log for reference. I asked participants to document

information about their formative assessment use on the log for one class period of their

choosing for 3 consecutive school days. I originally planned to ask participants to fill out

the log for 5 consecutive days, but based on the time of year I was conducting my

research, I decided that 3 days would be more reasonable for teachers. I determined that

this amount of time for teachers to log their formative assessment use would still provide

me with sufficient data. I advised participants not to change their normal teaching

practices so that results could be authentic. I asked participants to contact me by email within 3 weeks so that I could collect their logs. I transferred the data from the logs onto an

electronic document. The original paper copies are being kept in a three-ring binder in a

secured cabinet for 5 years, after which they will be shredded.


Participants were contacted via email to set up a date and time for me to conduct

their classroom observation and interview. I chose to conduct the observations before the

interviews so that teachers’ classroom behaviors would not be influenced by interview

questions about their formative assessment practices. Each participant received an email

reminder a day prior to their observation and interview times. Direct observations took

place for one 55-minute class period in each of the 10 participants’ classrooms. Before

the observation, I informed the participants that all data would remain confidential. I

encouraged them to plan, implement, and deliver their lesson as normal during the

observation. Taking the role of observer-participant, I recorded data by taking notes on

my observation protocol.

Researcher’s Role

I taught at Hammond High School as a mathematics and biology teacher but

resigned from the district 2 years prior to the study to pursue other educational interests.

Therefore, I held no current supervisory role with the participants, and my previous

relationships at the school did not affect the outcome of the study. While teaching at the

local school, I worked as a mentor for student teachers and a facilitator on the school

improvement team. These roles led to experiences in classroom observations and

instructional conversations that were helpful during data collection for this study. A

reflection on my experiences at Hammond, and within the district, revealed that even

though formative assessment practices were regularly encouraged by school leaders,

actual teacher implementation in the classroom was seldom examined. My reflections,

along with the knowledge of consistent student achievement-related problems at the


school, prompted me to conduct this study. Furthermore, I had heard many school

leaders and teachers describe formative assessment practices that were summative in

nature. My awareness that teachers hold varying and diverse definitions of formative assessment may have introduced a bias on my part. However, I examined the literature and designed my study in a way to minimize any potential bias, including asking several

educators to review my data collection instruments prior to their use and keeping a

journal to perform regular self-reflections during the study.

I understood the importance of being aware of any biases, assumptions, and

previous experiences that could influence my research; therefore, I carried out the study

in an ethical and methodic manner. I had observation protocols and interview questions

vetted for prejudice, I cross-checked data for consistency among the three sources, and I

performed regular self-reflections (Yin, 2016). I also examined my reflective research

journal notes and did not find any emergent biases in my work. Merriam and

Tisdell (2016) stated that this self-awareness, or reflexivity, is an important component

for a study’s credibility.

Data Analysis

Qualitative data analysis is an inductive process of simplifying and making sense

of the collected data (Merriam & Tisdell, 2016). Although I documented initial data

analysis, such as emergent understandings and insights, in my reflective research journal

during the data collection stage of the study, final analysis began after the data collection

was completed (Patton, 2015). I followed the five-phase cycle of data analysis for

qualitative research recommended by Yin (2016): (a) compiling, (b) disassembling, (c)
reassembling, (d) interpreting, and (e) concluding. The phases are laid out sequentially,

but the actual analysis process is not linear in nature; the phases have “recursive and

iterative relationships” (Yin, 2016, p. 187).

Research Questions

In alignment with the framework for this study, I set out to understand more deeply how formative assessment was used at Hammond to check for student understanding and to adjust instruction. I developed the following research questions to guide my investigation:

RQ1: How do teachers use formative assessment to check for student

understanding of state and district learning goals?

RQ2: How do teachers use student feedback collected during formative

assessment to adjust their instruction?

RQ3: What are teachers’ perceptions of formative assessment to check for

understanding and to adjust instruction?

Answers to RQ1 and RQ2 were generated from interviews, observations, and teacher

logs; answers to RQ3 were gathered from interviews. Reflective research journal notes

contributed to all three RQs by providing documentation of insights and emergent ideas

throughout the data collection process. During interviews, I posed questions designed to

promote a greater understanding of teachers’ formative assessment use to check for

understanding and to adjust instruction as well as to gain insights into perceptions that

might influence implementation. The questions gave participants an opportunity to

demonstrate their knowledge of formative assessment, share how they used formative

assessment strategies to check for student understanding, discuss how they adjusted
instruction as a result of formative feedback, and reveal factors that influenced their

formative assessment use. Observations allowed me to witness formative assessment use

to check for student understanding and to understand how participants adjusted instruction based on student feedback. I also noted the extent to which participants implemented each of these

formative assessment practices; for instance, whether participants elicited responses from

a few students or the entire class, or how regularly they assessed students or adjusted

instruction. Participant logs permitted me to gather additional information about how

teachers used formative assessment to check for understanding and to adjust instruction.

All data collection instruments allowed me to engage with participants on a deep and

meaningful level to gather rich data about their formative assessment practices and

perceptions.

During the initial phase of data analysis, I compiled and organized all data from

the observations, interviews, and teacher logs (Yin, 2016). I made verbatim transcripts

by listening to audiotapes of the interviews and typing them into the first column of a

three-column computer document. The second column consisted of descriptive and

reflective research notes, and the third column was used for coding, which is described in

the next section. Observation and teacher log data were typed into a computer document

in a similar manner. Patton (2015) recommended that researchers transcribe their

interviews and type handwritten field notes or other collected data so they can become

fully immersed in the data, which may lead to valuable insights. I recorded additional

insights that I uncovered while transcribing and rereading data in my reflective research

journal.
Coding

The coding process is a way for researchers to organize data “from the ‘bottom

up,’ by organizing the data inductively into increasingly more abstract units of

information” (Creswell, 2013, p. 45). I used open and axial coding to identify central

themes that emerged from the data while staying grounded in the conceptual framework

of the formative assessment theory posited by Black and Wiliam (1998b). The coding

process involved reducing data into smaller pieces and then rebuilding the data into larger

categories based on common patterns. During the disassembling phase of data analysis, I

assigned codes to fragments of data (Yin, 2016). Data from observations, interviews, and

teacher logs were reread line by line and small pertinent segments of data were

highlighted, coded, and recorded in the third column of a document. I noted emerging

larger categories as they developed. Merriam and Tisdell (2016) referred to the initial

coding of data as open coding. In the next phase, I used axial coding to reassemble the

data from individual codes into broader groups (Yin, 2016). Codes from the first phase

were color-coded according to larger categories to provide both organization and

thoroughness during data analysis. During axial coding, I continually organized data by

comparing, modifying, and reshaping codes into more coherent groupings (see Table 1).

The categories that emerged from the data supported the research questions. Lastly, I

evaluated the data across the categories and several overarching themes emerged. The

themes captured recurring patterns found across all the sources of data (Patton, 2015).

Emergent themes can be susceptible to researcher bias during the reassembling phase of

analysis. Therefore, as suggested by Yin (2016), I made constant comparisons between


data and sources, watched for negative instances (discrepant data), and engaged in rival

thinking (looking for alternative explanations). A thorough analysis of data resulted in

four inductively developed themes: implementation, the feedback cycle, knowledge and

beliefs, and barriers and supports.

Accuracy and Credibility of Findings

Yin (2016) discussed the importance of building trustworthiness and credibility

into qualitative research and recommended researchers use transparency, methodic-ness,

and adherence to evidence. Based on Yin’s (2016) objectives, I thoroughly described and

documented research procedures, and all data were available to participants, colleagues,

and the doctoral committee for review. Transparency allowed for scrutiny of my work

and refinement of my results. I used a carefully planned methodical approach for all

research processes to establish a rigorously conducted study. The systematic approach

involved regular self-reflection to avoid bias. A reflective research journal was kept to

document emerging ideas and interpretive commentary. I collected detailed data, tested

the data for consistency from multiple sources, and objectively used the data as evidence

to draw conclusions for the study. I also confirmed saturation after noticing a reiteration

of themes during my analysis.

To further establish accurate and credible findings, I used triangulation, face

validity, and member checks. Triangulation, a strategy that involves collecting data from

a variety of sources and methods, strengthened the credibility, reliability, and validity of

the study (Yin, 2014). Multiple data points—observations, interviews, and logs—were

used to triangulate the data to determine overall consistency of emergent patterns and to
confirm my findings (Merriam & Tisdell, 2016). While developing the observation and

interview data collection instruments, I established face validity by having several

colleagues with advanced educational degrees vet the protocols. They examined the

phrasing of questions, unwanted bias, research question alignment, and participant

usability. I used the feedback to revise and improve the protocols, thereby strengthening the data collected (Yin, 2016). Finally, through a process called

member checking, I asked participants to review the findings for accuracy of their data

(Merriam & Tisdell, 2016). Participants were emailed a summary of the findings in a

Google document to review. This method allowed for convenience and ease of providing

feedback. The member checks ensured my findings were not biased and confirmed

participant responses were accurately represented.

Discrepant Data

Credibility also includes representing and discussing inconsistent data (Patton,

2015). Patton (2015) claimed that understanding contradictions in data or among data

sources could be “illuminative and important” and “offer opportunities for deeper

insight” (p. 553). Data gathered from observations, interviews, and logs helped

determine if uniformity existed among the triangulated data. Analysis of observational

data resulted in the emergence of two themes: implementation and the feedback cycle.

Interview data analysis resulted in four themes: implementation, the feedback cycle,

knowledge and beliefs, and barriers and supports. Analysis of log data resulted in two

themes: implementation and the feedback cycle. Therefore, the three data sources

showed a high degree of agreement. Implementation and the feedback cycle themes
emerged from all three data sources. The additional two themes that emerged from the

interview data, knowledge and beliefs and barriers and supports, helped me gain insights

into participants’ perceptions of formative assessment.

There were a few instances of inconsistencies within the data sets concerning

Participant 3. The participants in the study taught a range of subjects, grade levels, and

learning abilities. Included were one advanced placement class, one honors class, seven

regular education classes, and one special education resource classroom. Including data from a variety of classes allowed me to collect information from a heterogeneous sample

to provide a more thorough understanding of teacher formative assessment use at

Hammond. Even though there were many similarities in the physical and instructional

organization of the classrooms, some aspects of Participant 3’s special education class

differed from the others. Participant 3 had only eight students in class as opposed to an

average of 23 students in the other classrooms, which allowed her to more readily collect

feedback from all students. She, unlike the other participants, reviewed and collected

warm-ups from each student. Half of Participant 3’s class time each day was allotted for

remedial instruction and study skills. During the other half, students received assistance

on assignments from other classes—similar to a study hall but with individual teacher

tutoring as needed. Therefore, there was less time for Participant 3 to implement

formative assessment practices, which resulted in less formative assessment data

collected during observations and recorded in the 3-day log. During the interview,

Participant 3 was also the only participant who could not correctly define any

components of formative assessment stating, “Um, FA assessment? I’m not really sure.
Would that be assessment based on individual needs or whether their levels of

functioning are met?” Participant 3 also said that she did not know if she used formative

assessment in class. Because I observed this participant using formative assessment

strategies, I briefly summarized for her what was meant by formative assessment. The

participant did not realize formative assessment was the term for the instructional

strategies she implemented. Participant 3 might not have been familiar with the

terminology because this was her first year of teaching following a 15-year hiatus. Data

collected from this participant did not affect the themes or overall study findings.

There were some inconsistencies between log, interview, and observational data

regarding the amount of student feedback teachers elicited. Participants reported in their

logs that they collected student feedback from most or all of their students during

formative assessment. However, during interviews, participants often mentioned their

frustration with regularly receiving formative feedback from only a few students at a

time. Likewise, observational data showed that participants rarely asked for or received

feedback from more than a couple of students during a class session. There were only

two instances observed where participants asked for formative feedback from all

students; however, fewer than half of the class responded. Instead, participants collected

formative feedback from only a few students throughout the class session. In addition, 9

out of 10 participants cited lack of student participation during formative assessment as

one of their top barriers to implementation. However, under the log heading “With this

formative assessment, I checked the understanding of (all, most, few, one, no) students,”

most participants indicated that they checked “all” or “most” of their students’
understanding during the formative assessment strategy they recorded. One possible

explanation for the discrepancy was that participants might have interpreted the question

to mean how many students received the formative assessment task rather than how many students actually provided feedback.

I also noted a discrepancy regarding warm-ups when comparing log, interview,

and observational data. All participants who had recorded warm-ups as a strategy on

their log wrote that they gave the formative assessment before learning. By writing

“before,” participants indicated that their warm-ups were used to check for student

understanding before learning took place. In other words, they used formative

assessment to pre-assess what the students knew before starting the lesson. Yet, in the

observations and interviews, participants overwhelmingly used, or stated they used,

warm-ups after learning took place to check for student understanding of concepts

previously taught. One explanation for the discrepancy might be that participants were

interpreting the term “before” to mean “I gave the formative assessment before I began the

day’s lesson” (at the beginning of class) rather than “I gave the formative assessment

before I taught students the concepts.”

Data Analysis Results

Local evidence showed that, despite administrative encouragement, there was a

lack of consistent implementation of formative assessment to help students meet learning

goals. The purpose of this study was to examine how teachers used formative assessment

to check for student understanding and to adjust instruction so that school leaders at

Hammond could make informed decisions to support its consistent use. Data were
collected from observations, interviews, and teacher logs. A thorough analysis resulted

in four inductively developed themes: implementation, the feedback cycle, knowledge

and beliefs, and barriers and supports. The themes and supporting categories that

emerged from the data are shown in Table 1.

Theme 1: Implementation

One emergent theme of the study was teacher implementation of formative

assessment to check for understanding. This theme included the following subcategories:

use and communication of learning goals; formative assessment strategies used and the

reasoning behind their selection; and details on how teachers implemented warm-ups,

exit slips, and formative questioning—the three main formative assessment strategies

used at Hammond. The implementation theme aligned with RQ1 to give insights into

how teachers used formative assessment in the classroom. Wiliam (2018) recommended

that teachers consider three essential questions during formative assessment

implementation: Where are my students going? Where are my students right now? What

do I need to do to get my students there? In other words, teachers should ask themselves: What learning goals do I want students to meet? What are my students’ current levels of understanding? What can I do to bridge the gap between current student understanding and the learning goals? Therefore, an important part of the

formative assessment process, and one with which teachers must begin, is to determine

what learning goals students must understand. Without establishing clear learning goals,

aligning formative assessment strategies with the goals, and communicating goals with
students, formative assessment implementation cannot be fully realized (Fisher & Frey,

2014a).

Learning goals. During my observations, I noted evidence of student learning

goals for the lesson. Seven participants posted learning goals (called learning targets at

Hammond) visibly in the classroom. Participant 1 asked students to write down the

learning target from the board onto an agenda sheet, and Participant 8 had the learning

target written on the board and typed on top of the students’ note sheets. Only Participant

10 reviewed the lesson’s learning target with the students at the beginning of class. Of

the seven participants with visible learning targets, three posted vague outcomes that

were not measurable and four had specific content-related outcomes. For example,

Participant 9’s learning target was ambiguous: “I can look at the history of Latin

America,” as opposed to Participant 8’s detailed learning goal, which asked: “How do we

use our knowledge of points to identify segments, rays, and intersections?” Participants

5, 6, 8, and 10 had learning targets posted in student-friendly language; the other six

participants either had no learning target evident, or the target was written in the form of

a content standard. Even though some evidence of learning targets was apparent in most

classrooms, only Participant 10 mentioned learning targets (three times) during the

interview when discussing his formative assessment practices. As Fisher and Frey (2014a)

asserted, learning targets are an important component of the formative assessment

process; teachers and students must know the aim of the lessons so that student

understanding can be measured.


Table 1

Inductively Developed Thematic Categories

Open codes                           Axial codes                  Themes

Learning goals/targets               Factors of FA
How FA used                          implementation
Reasons for FA strategy choice
When FA used

FA strategies used                   FA strategies                Implementation
Warm-ups                             implemented
Exit slips
Formative questioning

How student feedback elicited        Student feedback to
Student participation in FA          teacher

Instructional adjustments used       Teacher use of student       The feedback cycle
How adjustments implemented          feedback
When adjustments made

Rechecking after adjustments         Rechecking student
                                     understanding

Defining FA                          Knowledge about FA
Examples of FA
Perceived knowledge and use of FA                                 Knowledge and beliefs

Beliefs about FA use                 Perceptions and beliefs
Benefits of FA                       about FA

Student participation during FA      Barriers to FA
Time issues                          implementation

Student feedback/elicitation         Barriers to adjusting
Time                                 instruction
Knowledge                                                         Barriers and supports

Student participation/feedback       Supports to FA
Classroom management                 implementation and
Schoolwide initiatives               adjusting instruction
Collaboration
Professional development

Note. FA = formative assessment.


Formative assessment strategies. All participants reported that they used

formative assessment strategies to check for student understanding of learning goals. I

invited participants to give examples of strategies and to explain how they implemented

them. Formative assessment strategies cited were formative questioning (10

participants), warm-ups (nine participants), quizzes (eight participants), exit slips (seven

participants), walking around observing or talking with students (five participants),

homework checks (four participants), and reading comprehension (two participants).

Other formative assessment strategies stated were choral response, learning logs, Kahoot,

Quizlet, and pair-share (Participant 1); one-to-five scale (Participant 2); timed readings

(Participant 3); muddiest point (Participant 5), and “thumbs up/down” (Participant 8).

Only Participant 1 mentioned using technology to formatively assess students. She stated

that she frequently used the web-based software Kahoot and Quizlet to formatively assess

for student understanding. Participant 1 said, “If we do these [technology assessments]

as a class, when they choose incorrectly, I am able to immediately explain why

something is not the answer. It can help me clear up misunderstandings.” A few

participants also mentioned that Hammond had purchased classroom sets of clickers

several years ago. Clickers are a technology in which students use hand-held devices to

record responses that are immediately transmitted to the teacher via a software program.

Participants often used formative assessment to check for student understanding

during class. Nine participants specified that they implemented formative assessment

strategies daily in some manner. During observations, participants averaged using three

formative assessment strategies per class session. According to the logs, participants
implemented a total of 63 formative assessment strategies during a total of 30 classes,

thus averaging two formative assessment strategies per class session. Participants who

regularly used formative assessment stated that the practice allowed them to stay

informed about whether or not students understood what was being taught. Participant 1,

who implemented several formative assessment strategies daily, passionately stated, “I

have no idea how anyone could teach the next day, let alone the next lesson in a unit,

without knowing and fully understanding where their students are at in the learning

process several times each hour.” Likewise, Participant 10 said he used formative questioning constantly throughout the class period and tried to stop every 20 minutes to

do more targeted checks to ensure students understand “what is going on” and that they

are “not getting lost.” The two participants who used formative assessment strategies the

least stated, “I do at least one formative assessment a day, the warm-ups, because the

school wants us to do that. I ask questions to everyone pretty much every day too”

(Participant 3) and once or twice a week “feels like a good amount for me to know what

is going on with students” (Participant 7).
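As a quick arithmetic check, the log average reported above follows directly from the totals already given (a worked calculation, not additional data):

\[
\frac{63 \text{ logged strategies}}{30 \text{ class sessions}} = 2.1 \approx 2 \text{ strategies per class session}
\]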

Observational data showed that participants implemented five main strategies to

check for student understanding: formative questioning, warm-ups, quizzes, exit slips,

and walking around observing or talking with students. These five formative assessment

strategies aligned with the strategies that participants reported using in class during their

interviews and on their logs. Participants revealed three main reasons they chose

particular formative assessment strategies: (a) they were most comfortable using these

strategies, (b) they were quick and easy to use, and (c) they were feasible to implement
given student behavior issues they often encountered. Only Participants 1 and 6 said they

chose strategies with consideration of which one would best show student understanding.

To gain a better understanding of how teachers used formative assessment in the

classroom (RQ1), in this section I discuss the three most frequently implemented formative

assessment strategies at Hammond: warm-ups, exit slips, and formative questioning.

Detailed data collected provided valuable insights into how participants used these three

formative assessment strategies—information that may be beneficial for school leaders

when planning how to support its consistent and accurate use.

Warm-ups. Warm-ups are short formative assessment tasks given at the

beginning of class to check for student understanding about a past or upcoming lesson.

At Hammond, warm-ups were also known as bell-ringers or kick-offs. Warm-ups were used extensively, consistent with participants’ reports that they were a schoolwide initiative established at the beginning of the year. When probed about the administration’s purpose for this initiative, three main answers emerged: (a) a way to

check if students understood the material covered in class, (b) a management tool to get

the classroom organized and started, and (c) an uncertainty of the purpose. Eight

participants stated in the interviews that they regularly gave students warm-ups, nine

participants implemented a warm-up during observations (Participant 10, who did not

implement a warm-up, mentioned he often had students complete a current events video

warm-up each day to assess summarizing skills); and participants recorded 20 instances

they used warm-ups in their logs (out of 30 total classes logged). Participants did not

always use warm-ups as a formative assessment. Some participants used them to have
students complete a managerial task (take out your computer and log in), a personal

reflection (what does this quote mean to you?), or as a classroom community builder

(write something that you want to share about your weekend). Only warm-ups with a

formative purpose were included in the data.

All participants stated that they planned their warm-ups, and they primarily used

them to check for student understanding of facts, skills, or concepts that students learned

in the previous lesson. Several participants mentioned that understanding the previous

lesson was important to understanding the new lesson. When I probed participants about

how they decided which questions to include on the warm-up to check for student

understanding of a previous lesson, only Participant 10 said that he developed questions

to determine if students understood the lesson’s learning target. No participants

mentioned that their warm-up questions were based on student feedback from a previous

formative assessment, such as an exit slip. Participant 1 stated, “I use what I feel students

didn’t get the day before,” and similarly Participant 5 said, “It [question on the warm-up]

is mostly an intuitive feeling, I don’t necessarily have data.” I also noted that participants

gave students ample time to complete their warm-ups. Students were given between one and four warm-up questions and an average of 16 minutes to complete them.

Participant 4 used a five-minute timer for students to complete a one-question warm-up.

Also, most participants completed managerial duties, such as attendance and paperwork,

while students did the warm-ups. Of the nine participants observed implementing a

warm-up, seven participants did not check any students’ answers before reviewing the

answers aloud. Only Participant 5 walked around assessing student answers while they
worked on the warm-up. The participant gave students feedback such as, “It looks like

you are on the right track, keep going” and, “Look at number two again, what is the

question actually asking you?” Participant 5 also used feedback gathered from observing

student answers on their warm-ups to address misunderstandings with the whole class.

After the allotted warm-up time, most participants either gave students the answers, or

they asked for volunteers to answer the questions. Observational data indicated that the

few students volunteering answers were mostly giving correct ones. Participants spent

considerable time reviewing the warm-up answers, often asking formative questions during

the process.

Students often copied the warm-up answers instead of

writing their own answers. In seven classes, participants wrote the warm-up answers on

the board, and approximately one third of the students in each class wrote the answers on

their papers. Participants 4 and 9 admitted they often saw students copying the answers to

the warm-ups. After the warm-up, students in eight of the classes were instructed to keep

their work in their folders. Interview data showed that participants collected the warm-up

papers on Fridays. Most participants reported they did not look at the warm-up answers

but rather gave students credit for submitting them. Participant 9 illustrated the warm-up

implementation process, “They do it, we correct it in class most of the time, depends on

timing, I often write the answer on the board after I give them some time. Then they put

it back in their folder and get participation points for them at the end of the week.”

Exit slips. Exit slips are short formative assessment tasks given at the end of a

class to check for student understanding of what was taught during the lesson. Exit slips
are also known as exit tickets. Exit slips, like warm-ups, were a commonly used

formative assessment at Hammond. During the interviews, six participants stated they

used exit slips frequently. Observations revealed five participants implemented exit slips,

and two participants had planned to implement them but ran out of time (Participants 5

and 10). In the logs, participants used exit slips in 10 out of the 30 total classes.

Participant 7, who only used them occasionally, stated, “I do like exit tickets, I don’t do

them every day, I know I should.” Some reasons participants reported that they did not

implement exit slips more often included, “I always run out of time in class to give

them,” “I don’t have time to look at every exit slip every day to see what students are

understanding,” and, “Many students do not take them seriously or do not do them.”

Participant 1 stated, “This is a strategy I need to improve, I would like to find how to use

the feedback.” Participant 4 admitted, “I find it hard to get data with this strategy—to

have the answer actually be their own thoughts instead of just copying down their

neighbor. I hear them say, ‘What did you write?’ So, it is not a real good way to find out

what students know.”

Most participants said that they planned their exit slip questions. The one

exception was Participant 6 who stated exit slip questions were “usually based on the

discussion I heard in class during the lesson. They are usually unplanned. Sometimes I

think that they missed something during instruction that I think they need to go back and

think about.” Participant 6 prepared nine different baskets with half sheets of various

types of exit slips. She would decide which slip worked best for students to show their

understanding of the exit question she chose for the day. The other participants’ exit slips
consisted of a question or two written on the board; most provided space on students’

warm-up sheets to answer exit slip questions.

Participant implementation of the exit slips mirrored implementation of the warm-

ups; participants rarely checked student answers (Participants 2 and 6 looked at a few students’

papers when they asked questions). Most participants gave students the answers to the

exit slip questions during class. If participants asked students for answers, they would

receive responses from one or two students, again giving them limited feedback. On

average, approximately 58% of the students participated in the exit slip tasks. Many

students waited to copy the correct answers from the teacher instead of completing the

questions themselves; therefore, these students’ papers would show answers indicating

that they understood the content. In four of the five classes where participants gave an

exit slip, students copied answers from other students. For example, in Participant 4’s

classroom, many students were not participating in the lesson activity. During the exit

slip task, which required them to summarize what they learned from the activity, a group

of students who had not participated during the lesson asked a student who had participated for

the answers. In most classes, students were instructed to place their exit slip answers into

their folders. Like the warm-ups, participants collected students’ exit slip papers at the

end of the week for classroom participation points. Only Participants 3 and 6 collected

exit slips daily. However, Participant 3 answered the exit slip questions with the class

prior to collecting them, so all student answers would be correct and not useful for

assessing understanding.
Formative questioning. Formative questioning is a formative assessment

strategy by which teachers ask students questions specifically to check for understanding

of content so they can make instructional decisions to improve learning (Jiang, 2014).

Formative questioning was the formative assessment strategy used most often to check

for student understanding at Hammond. Other studies have also shown questioning is the

main strategy teachers use to check for student understanding (Fisher & Frey, 2014a;

Heritage & Heritage, 2013). During the interviews, all participants discussed using

formative questioning regularly to check for student understanding; likewise, all

participants used formative questioning during the observations. In the teacher logs,

participants recorded using formative questioning in 13 out of the 30 total classes.

Even though all participants used formative questioning during instruction, the

way they implemented this strategy varied. Participants 5 and 10 asked formative

questions during class regularly; Participants 1, 6, and 8 often; Participants 2, 4, and 9

occasionally; and Participants 3 and 7 rarely. Some formative questions participants

asked elicited more insight into student understanding than other questions. Participants

primarily asked low-level questions that were intended to elicit a right or wrong answer.

This finding reflects Jiang (2014) and Staunton and Dann (2016), who found that teachers

predominantly ask low-level (convergent) questions and struggle to use high-level

(divergent) ones. Low-level questions, which are often recall or factual, help teachers

determine student understanding, but only at a surface-level; high-level questions are

needed to uncover deeper student understanding. For example, Participant 7 asked the

class, “What is the definition of a ray?” “Name a ray in the picture,” and, “Are BA and
BD opposite rays?” Students answering these questions only gave short right or wrong

answers. In contrast, high-level versions of the same questions might be:

“BA is an example of a ray, why?” “How does a ray differ from a line segment?” and

“Why are BA and BD called opposite rays?”

Data showed that participants often continued with the lesson when they heard

correct answers from one or two students. For example, Participant 5 asked, “If an atom

gives away an electron, will it be more negative or more positive?” A few students

shouted “positive.” The participant responded, “Great!” Later in the lesson, it became

evident that students did not fully understand why the atom became more positive as they

struggled with changing atoms to cations and anions. Many students asked questions that

required the participant to spend time reviewing multiple examples. Only Participants 4,

9, and 10 (30% of the sample) asked a mixture of low-level and high-level questions. This finding corresponds to that of Kira et al. (2013), who determined that only 20% of

teachers were found to have balanced low-level and high-level questions. In addition,

eight participants acknowledged that the formative questions they asked were unplanned,

often called “on-the-fly” questioning (Chappuis, 2015). They planned warm-ups and exit

slips but developed formative questions during the lesson when they saw a need. As

Participant 6 stated, “Most questions are done on the fly, like looking out and seeing

blank faces in the room or that feeling that kids aren’t getting it. Then I ask random

questions to hone in on what the actual issue is.” There were two instances of planned

formative questions: Participant 2, who stated that he often prepared several formative

questions to ask students about the day’s main objectives; and Participant 4, who
regularly displayed formative questions on PowerPoint slides to check whether or not

students understood the new concepts taught during a lesson.

How participants implemented formative questioning in their classes also varied.

In almost all cases, participants posed a formative question to the whole class, one to several students called out answers without being called upon, and then the participant acknowledged whether those students were correct or incorrect. This student

feedback elicitation process during formative assessment is discussed in more detail in

the next section.

Theme 2: The Feedback Cycle

The second theme that emerged from the data was the feedback cycle. This

theme encompassed both feedback about student understanding elicited by the teacher

and teacher use of the student feedback from formative assessment to make instructional

decisions. Therefore, the feedback cycle theme connected with RQ1 and RQ2.

Subcategories included student feedback to the teacher and teacher use of student

feedback. Discussed within the subcategories are how student feedback was elicited,

types of instructional adjustments used, how instructional adjustments were implemented,

and rechecking student understanding after adjustments.

Student feedback to teacher. Participants at Hammond collected very little

feedback about student understanding from the warm-ups and exit slips because of how

they implemented these formative assessment strategies. Data showed that participants

might not have realized how little feedback about student understanding they elicited because they often indicated, in their interviews and logs, that they checked all students’ understanding during formative assessment. This was clearly not the case during observations, where

participants predominantly assessed only a few students throughout the class session.

Because student feedback from warm-ups and exit slips was discussed

previously under the implementation theme, this section primarily focuses on student

feedback elicited from formative questioning. Observational data showed that in every

participant’s classroom, students predominantly did not raise their hands to volunteer

answers to teachers’ questions. Participants rarely called on students; students were only

chosen to answer a formative question when no one responded or, in a few instances,

when a student was not paying attention in class. Data showed that the norm of allowing

students to give answers to teacher formative questions without first being selected had

several consequences for student feedback. First, wait time was affected. When students

who knew the answer stated it aloud, there was virtually no wait time for other students

to process information. Duckor (2014) also found that wait time, which was important to

allow students the opportunity to think about questions and to formulate their answers,

was lacking in classrooms. Second, many of the same students called out the answers

throughout the class sessions. On average, four to six students answered all the formative

questions in each class. This finding corresponds to studies conducted by Helf (2015)

and Wiliam and Leahy (2015), who found that only a small number of students offered most

of the answers in class. Wiliam (2014) stated that studies showed only 25% of students

regularly answered questions in class. My study data showed that an average of 23% of

students at Hammond answered questions in each class (approximately five students in

each class answered all the formative questions; the average class size was 22 students).
Finally, as with the warm-ups and exit slips, the way participants implemented formative

assessment did not give all students an opportunity to provide feedback; consequently,

most student understanding remained unchecked.
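The 23% figure follows directly from the class averages reported above (a worked calculation using only those averages, not additional data):

\[
\frac{5 \text{ responding students}}{22 \text{ students per class (average)}} \approx 0.23 = 23\%
\]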

Data from observations showed that participants did not use instructional

techniques that allowed all students to show their understanding when they implemented

formative assessment. Johnson et al. (2013) found that teachers in effective urban

schools consistently used strategies that allowed them to collect feedback from all

students, not just a few. In this study, however, participants used strategies that only

allowed them to collect feedback from a few students. For example, when participants

asked a formative question to the class, they waited until a student called out an answer.

With only a few students stating answers aloud, little student understanding was checked throughout the class session. This finding aligns with Haydon, Marsicano, and Scott’s (2013) research, which found, “Teachers typically ask students to volunteer and

answer questions one at a time. As a result, most instruction involves a few students

verbally responding to teacher questions” (p. 182). Furthermore, the same few students

answered questions throughout the class sessions while the rest of the class remained

passive learners. These students often volunteered most of the answers in their other

classes as well. For example, Participants 2 and 10 shared many of the same students in their classes. The few students who answered most of the questions in Participant 2’s class were the same few who answered most of the questions in Participant 10’s class.

Research shows that techniques such as hand-raising with random calling, whiteboards,

clickers, hand signals, and response cards all give students opportunities to show their
learning during formative assessment (Messenger et al., 2017; Nagro et al., 2016).

Participants did not implement any of these whole group elicitation techniques during

observations, did not mention them during the interviews, and did not record them in

their logs. The one exception was Participant 5, who, at one point during the lesson,

asked students for a “thumbs up” if they understood and a “thumbs down” if they did not

understand. Only about 25% of the students participated (mostly “thumbs up”), so this

formative assessment strategy, as implemented, was ineffective at revealing class

understanding.

When teachers are not effectively using techniques to give all students an

opportunity to show their understanding during a formative assessment, they may

interpret a few right answers as an indication that all students understand. This incorrect

interpretation is illustrated in an interaction between Participant 2 and students during a

lesson on probability. Participant 2 asked the whole class, “What is the chance of getting

one head when you flip a coin?” About half the students answered aloud, “50%,” and the

participant said, “Good.” The participant then inquired, “What about getting two heads

in a row?” One student eventually called out the answer, and he was correct. Participant

2 replied, “Good,” and asked the class, “What about three heads in a row?” The same

student answered after a few seconds, and the participant said, “Great!” Then he gave

students a question on the board, “What is the chance of getting four right in a row if you

guessed on a multiple-choice test?” One student immediately asked for help, and the

participant spent several minutes assisting him. Seven out of 17 students were not

working on the question: they were talking to another student, just sitting passively, or
were on their cell phones. After approximately 3 minutes, the participant asked, “Who

can give me the answer to this?” and two students said the answer aloud. The participant

then inquired, “So far are we getting it?” Two students said “yes”; the rest of the class

remained silent. The participant moved on to the next part of the lesson. I recorded

similar student-teacher interactions in all classes.
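For reference, the probability content behind this exchange can be worked out as follows, assuming a fair coin and, for the final question, four answer choices per item (an assumption; the lesson did not specify the number of choices):

\[
P(\text{1 head}) = \frac{1}{2}, \qquad P(\text{2 heads in a row}) = \left(\frac{1}{2}\right)^{2} = \frac{1}{4}, \qquad P(\text{3 heads in a row}) = \left(\frac{1}{2}\right)^{3} = \frac{1}{8}
\]

\[
P(\text{4 correct guesses in a row}) = \left(\frac{1}{4}\right)^{4} = \frac{1}{256}
\]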

Another common formative assessment practice was for participants to ask the

class, as a whole, if they understood. An example of this interaction was when

Participant 4, after explaining the stock market crash that preceded the Depression, asked, “Any questions on this?” One student said “no” aloud, and Participant 4 continued with the

next part of the lesson. On another occasion, two students gave an answer to “What

makes a healthy economy?” Participant 4 then inquired, “Any questions on this? Make

sense?” The class was silent. Then Participant 4 said “good” and displayed the exit slip

question. Similar interactions were observed in most participant classrooms.

Data showed that there were inconsistencies between what participants said they

believed during the interviews—that all students should be checked for understanding—

and their current practices. One possible reason for the discrepancy could be that

participants did not know of or had not tried techniques that support eliciting a greater

number of student responses during formative assessment. Gathering feedback about all

students’ understanding is extremely important because it enables the teacher

to make better decisions about how to adjust instruction to help bridge the gap between

what students currently understand and the targeted learning goals (Chan et al., 2014;

Johnson et al., 2013; Wiliam, 2014).


Teacher use of student feedback. In the formative assessment process, once

teachers have collected student feedback, they must interpret what the feedback indicates

about student learning and then make appropriate instructional adjustments to promote

further learning (Duckor, 2014). During the interviews, all 10 participants reported that

they adjusted their instruction when feedback from a formative assessment strategy

showed students did not understand a concept. The main way participants said they

adjusted their instruction was by stopping the current lesson to address the misunderstood

content. All participants consistently used one or more of these words to explain what

they did after collecting feedback from a formative assessment: “reteach,” “rephrase,”

“re-explain,” “go over again,” or “repeat.” Seven participants specifically stated that they

gave more examples, used different wording, provided analogies, or used a different

mode of instruction (i.e., showing a video or asking another student to explain). Half of

the participants (2, 4, 5, 9, and 10) mentioned using student feedback to plan or to adjust

their future lessons. Participant 4 stated, “When I see a lot of kids get the same things

wrong, I will put it in the bell-ringer the next day or put into the next assignment.”

Participants 2 and 9 said that they often created a new assignment the next day that

addressed what students did not understand. In contrast, Participant 6 admitted,

“Theoretically, I would change what I do the next day based on feedback, but I don’t

think I am there yet. I do not use them [formative assessment] for that [modifying the

next lesson] yet.”

Observational data were consistent with the interview data. All participants

demonstrated that they adjusted instruction during class by addressing misunderstandings


after giving the formative assessment task. Participants 1, 6, and 10 made the most

instructional adjustments during class, and Participants 3 and 7 made the fewest. The

number of instructional adjustments directly corresponded with the number of formative

assessment strategies implemented by the participants: the more formative assessment strategies a teacher implemented, the more opportunities there were for adjusting instruction to

address student learning needs. The most common way participants adjusted their

instruction was by stopping instruction to re-explain a concept, usually in a different way.

Participants modified instruction primarily for the whole class, not smaller groups of

students or individuals. This finding agrees with Andersson and Palm (2017), who also found that the most common way teachers modified instruction was for the entire group.

The participants in this study addressed student misunderstandings with the whole class

by implementing four main instructional strategies: (a) giving verbal explanations using

different words, examples, or analogies; (b) using manipulatives or visuals; (c) using

guided instruction to explicitly help students understand a concept or process; and (d)

completing additional practice problems with students.

One instructional adjustment participants used warrants further discussion. In six classes, participants decided to adjust their instruction by modeling additional practice

problems for students. Participants teaching Spanish, Geometry, Financial Management,

Economics, Pre-Calculus, and Chemistry classes completed extra practice problems on

the board while students either watched or followed along on their papers. While

completing a practice problem, participants often discussed their thought processes and

asked several low-level formative questions. Many students did not pay attention while
participants explained how to work through a problem, and after participants completed

an example, students often copied the work. In many classes, usually at the students’

requests, participants completed several extra practice problems in this manner. Student

confusion was often evident during independent work time, even after the teacher

reviewed several practice problems. Consequently, participants often completed several

of the independent assignment problems with students as well. The participants

completed practice problems without stopping to formatively assess student

understanding. In most classes, students relied on the participant to do the work and then

passively copied, an approach that does not support increased student understanding. The

exception was Participant 8, who requested that students work on practice problems on

their own to assess what they could accomplish without teacher help. Participant 8 then

walked around and checked several students’ papers before giving the answers to the

practice.

Data from logs gave a deeper understanding of how participants used student

feedback from formative assessment to adjust instruction. Log data showed participants

implemented 63 formative assessment tasks, averaging two per participant per class.

There were 31 instances where participants recorded no instructional adjustments as a

result of their formative assessment. They recorded that feedback from the formative

assessment indicated that students understood the lesson concepts. Conversely, there

were 32 instances where participants indicated that feedback showed students did not

understand the lesson concepts, and, therefore, they adjusted their instruction. Eight

participants stated they retaught or re-explained a misunderstood concept, six participants


gave more examples, and six participants performed more practice problems with the

students. Participants made these adjustments during the same class session they gave

the formative assessment task. Only Participants 4 and 8 specifically mentioned

incorporating student feedback into the next day’s lesson or warm-up. Participants 1, 8,

and 9 acknowledged that they planned to give students extra time to learn the concepts

during the unit but did not give any specifics as to when or how they would do so.
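The two log tallies are consistent with the total number of tasks reported above (a simple check using only the figures already given):

\[
31 \text{ (no adjustment needed)} + 32 \text{ (instruction adjusted)} = 63 \text{ logged formative assessment tasks}
\]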

Overall findings indicated that participants used student feedback from formative

assessment regularly to immediately adjust instruction for the whole class to address

misunderstandings. Participants only occasionally used student feedback to adjust future

lessons, and rarely, if ever, used the feedback to make adjustments for small groups of

students.

Data showed that after participants retaught, re-explained, or completed another

practice problem, they either did not recheck student understanding, or they checked

student understanding by directly asking students if they understood. During

observations, participants often asked, “Do you understand now?” or “Do you get what I

am talking about?” Six participants asked similar questions to individual students. In

almost every case, the student responded “yes.” Only one student (in Participant 10’s

class) answered, “No, not really.” Participant 5 stated during the interview that when she

asked for a show of hands about who understood, a few students would participate. When she asked who did not understand, usually no one would raise a hand. Fisher and Frey

(2014a) also found this to be true in their studies. They uncovered that when teachers ask

students if they understand, students often say yes or nothing at all because they are too
embarrassed to say they do not understand; they also might not even realize they do not

understand.

Participants also often directed the general question, “Does everyone

understand?” to the whole class after they taught a concept. In every instance, anywhere from one to a few students would shout “yes” immediately. The

participants replied “good” or “okay” and continued with the lesson. Participant 6

commented on the process, “I have trouble figuring out if anyone needs more help. I may

ask ‘Do we need to go back over this?’ and I will hear a few say ‘no’ and the rest say

nothing.” She continued by adding, “I might let them go into independent practice too

fast because I feel that everyone understands when they really don’t.” Likewise,

Participants 2, 4, and 7 admitted that they often did not know if students understood a concept better after they had retaught it to the class.

Theme 3: Knowledge and Beliefs

Knowledge and beliefs about formative assessment was another theme that

emerged from the study and included two subcategories: understanding formative

assessment and beliefs and perceptions of formative assessment. The subcategories

consisted of defining formative assessment, knowledge and use of formative assessment,

formative assessment use with students, and benefits of formative assessment. The theme

corresponded with RQ3. Data about teachers’ understanding of formative assessment

and their perceptions of its use showed what factors affected formative assessment use at

Hammond.
Understanding formative assessment. All but Participant 3 gave at least a

partial definition of formative assessment. One important component of the formative

assessment definition is that it is used regularly in classrooms. Only Participant 1 gave a

similar word in her definition by stating that formative assessment is used “continuously”

during class to assess students. The other participants provided no reference in their

definition about the frequency of formative assessment implementation; however, later in

their interviews five participants mentioned that formative assessment was usually

implemented daily. The second main component of the formative assessment definition

is that it is used during the learning process as opposed to after learning took place. Only

three participants gave indications that they viewed formative assessment as a process to

use while students were learning. Both Participants 2 and 10 used the phrase “in-the-

moment” assessment in their definitions, and Participant 4 said “checking along the

way.” Conversely, assessment given after learning is called summative. Studies from

OECD (2013) showed that teacher confusion about the difference between formative and

summative assessment might contribute to the inconsistent implementation of formative

assessment. Eight participants correctly stated that summative assessment is used to

determine what students know at the end of a unit/chapter/semester; in other words, after

learning took place. The participants provided examples such as a test, final exam, or

final project. Therefore, there was no indication of any confusion about the two types of

assessment, so confusion did not seem to be a hindering factor in participants’ formative assessment implementation, even though several studies have shown that teachers often do not
understand the difference between formative and summative assessment (Clark, 2012a),

which may affect their implementation (OECD, 2013).

The third part of the definition is that formative assessment is used to gather

information about or to check on student understanding. Nine participants stated that

formative assessment was used to “check” students; however, answers varied about what

they thought teachers were checking. For example, Participant 1 thought formative

assessment was used to check if students were “improving and getting better,” Participant

2 said it was used to check on student “deficiencies,” Participant 8 stated formative

assessment was used to check if students were “ready for a test,” and Participant 9

thought it was a way to check if students “retained” what they learned. Only Participant

10 specifically said formative assessment was a way to check for “student

understanding.”

The fourth component of the formative assessment definition is that student feedback is

used to adjust teaching. Miranda and Hermann (2015) declared that formative

assessment needs to be used to modify instruction to help students meet learning goals.

Without instructional adjustments, the formative assessment process is not complete.

Only Participant 6 mentioned that formative assessment is used to “change or alter”

instruction in her definition. The remaining participants made no mention of this

important component when defining formative assessment. My findings agreed with

OECD’s (2013) research, which showed that teachers often do not recognize formative

assessment as a practice that can be used to assist teacher instruction.


Participants could provide examples of formative assessment strategies to check

for student understanding, but their knowledge of strategies was limited. There are many

different formative assessment strategies teachers can use to check for student

understanding (see Appendix F for List of Possible Formative Assessment Strategies).

Only five different formative assessment strategies were commonly stated, even when

participants were reminded that their examples were not limited to ones they used. The

most frequently given formative assessment strategies were questioning (10

participants), warm-ups (eight participants), quizzes (eight participants), exit slips (six

participants), and talking to individual students (five participants). These five formative

assessment strategies were the same strategies participants commonly implemented

during observations. Participants acknowledged their awareness of formative assessment

strategies was limited. When questioned about how satisfied they were with their

current knowledge of formative assessment to check for student understanding,

participants’ answers included “a good understanding, but feel I could learn more”

(Participant 1), “I could learn more” (Participant 3), “decent knowledge, but I usually use

the same ones [strategies]” (Participant 5), and “I am still learning” (Participant 9).

Participants gave similar answers about their knowledge of using feedback collected from

formative assessment to adjust their instruction: “Knowledge is fair, but I could always

learn more” (Participant 1) and “OK, but open to learning more” (Participant 3).

Beliefs and perceptions of formative assessment. Teacher beliefs and

perceptions about formative assessment can also play an important role in their

implementation of formative assessment strategies (Yao, 2015). Participants


unanimously believed that formative assessment should be used to check the

understanding of every student, every day, at least several times throughout each class.

Their answers, however, did not agree with other data collected in this study. Several

participants’ answers exposed a reason why formative assessment beliefs and practices

might not have been aligned. Participant 8 commented, “Well, in theory, (chuckling) I think

everybody should be checked; probably every day some kind of check during the hour.

But that’s not reality.” When probed about the latter phrase, Participant 8 added, “Lack

of participation is a big problem. I can’t determine what they [students] are

understanding if they don’t give me anything.” Similarly, Participant 1 stated, “All

students should be checked, but the problem is that many students will not participate.”

This sentiment, that the lack of student participation during formative assessment tasks

kept teachers from assessing all of their students, was reiterated by all but one participant

during the interviews. Observational data confirmed a lack of student participation

during formative tasks in all participants’ classrooms. Student participation during

formative assessment tasks at Hammond is discussed further in the barriers and supports

theme of this section.

All participants believed that regularly using formative assessment to check for

student understanding was beneficial. Participants cited several benefits they perceived

were the result of consistently using formative assessment in the classroom: (a) to know

how well students understood concepts to determine who needed help (nine participants),

(b) to address student misunderstandings (three participants), (c) to adjust instruction

(three participants), and (d) to have students monitor their progress toward a learning goal
(three participants). Participant 10 added that formative assessment strategies were a

beneficial way to help a teacher check student understanding quickly. Participants 2, 4,

and 5 also mentioned the quick nature of formative assessment later in the interview.

Participant 2 was the only one who stated that a benefit of formative assessment was

better student learning in the classroom. Lastly, only Participant 5 voiced that formative

assessment was beneficial for uncovering student understanding prior to a new lesson. Data

from observations and teacher logs showed that formative assessment was not commonly used to check student background knowledge before a new concept was taught at

Hammond.

Theme 4: Barriers and Supports

The final theme, barriers and supports, gave insight into what circumstances

might hinder or promote teacher formative assessment use to check for student

understanding and to adjust instruction. The theme connected with RQ3. The barrier

component of the theme was divided into two subcategories: barriers to implementing

formative assessment to check for understanding and barriers to using feedback from

formative assessment to adjust instruction. Perceived barriers for implementation

included student participation during formative assessment tasks and teacher time.

Barriers to adjusting instruction were lack of feedback about student understanding, time,

and teacher knowledge. The support component of the theme consisted of student

participation during formative assessment tasks, administrative support (which included

schoolwide initiatives), collaboration time, and professional development.


Barriers to implementing formative assessment to check for understanding.

Participants unanimously stated that a lack of student participation during formative

assessment tasks, especially formative questioning, was a barrier to their formative

assessment implementation. During the interviews, participants showed frustration when

they discussed student participation during formative assessment. Participant 4

acknowledged that due to low formative assessment participation, “It is hard to know

what the students know and don’t know. The vocal students, I know where they are at.

The rest, not so much.” Likewise, Participant 9 disclosed, “The majority of the class is

not involved [during formative assessment]. So, I don’t really know what is going on

with everyone, and I don’t have the time to walk around to each student and have a

discussion with them.” Most of the participants who mentioned lack of participation also

mentioned the frustration of having the same few students in each class answering

questions. Observational data corroborated this statement; the problem seemed to stem

from students calling out answers without being called upon. Participant 1 stated,

“Sometimes, because the same students will be answering all the time, I feel like students

are really getting it. Then when I check an assignment or something, I realize they

aren’t.” Once again, this misinterpretation of responses connects to Duckor and

Holmberg’s (2017) idea of false feedback. Similarly, Participant 6 communicated, “I try

to ask often and get some feedback on how they [students] are understanding. But, like

for whole group questioning, I feel that I am just talking to five kids each day.”

Participants’ comments during the interviews seemed to suggest they felt the problem

with student participation during formative assessment was out of their control. The
perceived lack of control was also evident in participants’ body language—much

sideways head shaking and shoulder shrugs while they discussed student participation

problems. Participant 7 said, “You can’t know what a student knows or even have good

questioning in class if half the class just doesn’t care to even participate.” Participant 4

stated that his students are just “sitting there” during formative assessment tasks, and

there is no way for him to know if they understand or not. Similarly, Participant 9

affirmed, “I wish I could do more with formative assessment, but it’s hard to get the

students on board with me.” Several participants remarked that instead of participating

during formative assessment, students are frequently not paying attention. They

mentioned students often have their heads down, are on their cell phones, or are talking to

other students. Observational data confirmed that in most classes these student behaviors

were repeatedly displayed during formative assessment tasks. Student participation, as it

related to formative assessment, was the issue participants commented on most; it was mentioned 32 times during the interviews.

The second most common barrier to formative assessment implementation was

time. Participants cited several factors that limited the time available for implementing

formative assessment, the most frequent being classroom management and behavior

problems. Eight participants stated that much of their time was focused on student

disruptions, and the time spent on classroom management affected how often they chose

to use formative assessment strategies. For example, Participant 2 revealed, “I have to

spend so much time on student behavior problems that it takes away time to do formative

assessment checks.” Participant 6 frustratingly explained, “Student behavior problems


limit me from taking extra time to check for understanding. Sometimes I am at the point

of saying, ‘You just do it on your own then—I hope you were listening.’” Emmer and

Sabornie (2015) found that student participation and behavior problems are related—the

less student participation there is in class, the more likely students will show off-task

behaviors. Six participants mentioned other time-related barriers that affected their

formative assessment use: (a) curriculum pacing, (b) having to prepare for many different

classes, (c) classroom time management, and (d) being a new teacher to the building.

Participants 6, 7, 8, and 9, who were new to Hammond this school year, each admitted

they were pressed for time. Participant 6 confessed, “I am just so overwhelmed with

everything I need to do with my teaching that I am just trying to stay one day ahead of

everything. . . . But next year I am hoping that I can focus more on formative

assessment.”

Participants also mentioned that the time it took to evaluate formative assessment

tasks was a hindrance. Most participants admitted they did not have time to assess warm-

ups or exit slips for student understanding, which is why they did not collect them daily.

Participant 4 stated, “I don’t want to collect formative assessment every day because to

collect and read all that data, especially more intense answers than a yes or no, take too

much time.” Participants 1 and 4 acknowledged that if they learned strategies that

allowed them to collect formative feedback more quickly and easily, they would use

formative assessment more often. Perrota and Whitelock (2017) found that using

technology for formative assessment can “easily and effectively” help teachers check for

student understanding (p. 131). Three participants felt that technology might help them
more quickly implement formative assessment, but there were several obstacles, one

being time. Participant 2 commented, “I don't really use any technology, haven't really

had time to work with that—too busy, but I know there are a lot of ways that tech can be

useful for formative assessments.” Participant 7 also mentioned a lack of time to learn

new technologies that could help them implement formative assessment. Other

technology-related barriers included not knowing what formative assessment technology

is available, not knowing how to incorporate formative assessment and technology into

lessons, and not yet having technology available to them because they were new to the building.

Barriers to using formative assessment feedback to adjust instruction.

Participants revealed that the main barrier to using student feedback to adjust instruction

was that they often were still not sure what students did or did not understand; therefore,

they were unsure how to adjust instruction to meet student needs. This answer connected

to low student participation during formative assessment tasks and participants’ lack of

eliciting adequate student feedback to determine current levels of understanding. Six

participants stated that another barrier to adjusting instruction during class was time spent

on student disruptions. This response aligned with participants’ reasoning for not

implementing formative assessment with more fidelity. Participant 8, discussing the

connection between behavior problems and adjusting instruction, stated:

I feel like I have to keep going on with the lesson. I can’t take the time I may

want or the time students need to reteach because many can’t pay attention to
what I am saying. Also, the kids that don’t need the reteaching will start to act

out during the downtime and then I can’t get them back.

Two participants mentioned that they often did not know how to properly adjust

instruction to help students meet learning goals. Participant 3 offered, “I don’t always

know how to reteach something or how I could help address what they [students] don’t

understand.” Participant 7 said he knew the standard way to address misunderstandings,

such as re-explaining, but a challenge was learning newer ways to help students

understand content such as “flipped classrooms or other innovative teaching techniques

using technology that I have read about but haven’t tried.” Miranda and Hermann (2015)

and Wylie and Lyon (2015) also found that teachers struggle with knowing how to make

instructional adjustments based on student feedback. When adjusting instruction,

participants often did not do so in a way that helped advance student understanding, such

as having students watch participants complete extra practice problems instead of

participating in the process. Because they did not recheck student understanding,

participants also could not determine whether their instructional adjustments were beneficial.

Study data showed that limited knowledge about formative assessment was also a barrier

for participants. Participants were only able to partially define formative assessment, and

most admitted that their knowledge of formative assessment to check for student

understanding and to adjust instruction was somewhat lacking. Participants used a

limited number of formative assessment strategies, and they lacked understanding about

how to gain adequate student feedback needed to make sound instructional adjustments.

Although participants believed that all students should be formatively assessed


throughout class daily, none were observed doing so. Box et al. (2015) suggested that

when teacher beliefs and practices do not align, it is often due to a lack of knowledge in the

area of practice. Also, participants’ emphasis on the lack of student participation during formative

assessment tasks, and the frustration they showed, suggested a lack of

knowledge about instructional strategies that would allow them to invite all students to

participate during formative assessment tasks.

Supports. All participants mentioned that if more students participated during

formative assessment, then they would implement formative assessment more often.

Participants also offered that they would increase their formative assessment use if they

had fewer behavior problems in their classroom. Three participants discussed schoolwide

support needed for behavior problems, such as more consequences for disruptive

students; however, most admitted that improving their classroom management practices

would allow them to check for and address student understanding better. Demographic

data showed that four participants had four or fewer years of teaching experience and two

were new to teaching high school, so they may still be establishing their classroom

management styles. Nine participants stated that schoolwide initiatives would cause (and

in some cases had already caused) them to use formative assessment more often. Since the

beginning of the school year, leaders at Hammond requested that teachers use warm-ups

and exit slips regularly in their classrooms. Observational data showed that nine

participants implemented warm-ups and six participants implemented exit slips, and all

participants recorded a warm-up and exit slip at least once in their logs. Participants 3

and 4 said that the administration’s focus on these strategies caused them to incorporate
warm-ups in class daily. Participants 2, 9, and 10 mentioned that another benefit of

leaders implementing schoolwide initiatives is that students might participate more

during formative assessment tasks if all teachers are using them consistently. Participant

9 stated, “If you are the only classroom doing something, then sometimes it’s hard to get

students to do [the formative task]. But if all the teachers are doing it, they are more

willing to do it.”

All participants commented that school leaders could support formative

assessment use by providing time for teachers to collaborate during professional

development or school meetings. Participants said they would like to work with other

teachers to (a) learn new formative assessment strategies, especially ones that they can

give and assess quickly; (b) discuss which strategies are working for other teachers and

share ideas; (c) learn about using technology to support formative assessment; and (d)

create formative assessment tasks for current lessons. All participants also acknowledged

that professional development would improve their formative assessment use. Interview

data showed participants were willing to learn more about this instructional practice.

Brink and Bartz (2017) found that professional development on formative assessment has

a positive influence on teacher implementation. Participant 10, who used formative

assessment the most consistently and purposefully of the participants, stated:

Having training helps. I went to a PD one time a few years back about formative

assessment given by a teacher that used it and was passionate about it and realized

that I could be doing this in my classroom—checking for student understanding


more often. After that, I changed my teaching to accommodate more time for

formative assessment. It made a big difference.

Participants felt that learning new formative assessment skills and strategies during

professional development would be valuable to support their implementation. Participant

6 added that it was important for presenters to teach research-based strategies during

professional development: “I want tried and true things that people give me that work or

have data that back it up. I want to know if I do something, it works.” Based on study

data, professional development appeared to be a logical choice for a project aimed at

supporting formative assessment use at Hammond.

Interpretation of the Findings

The purpose of this project study was to examine how teachers use formative

assessment to check for student understanding and to adjust instruction. I analyzed data

from observations, interviews, and teacher logs with the study’s purpose and research

questions in mind. Four themes emerged from the data: implementation, the feedback

cycle, knowledge and beliefs, and barriers and supports. The findings revealed

information about teachers’ formative assessment practices that aligned with previous

research and exposed specific implementation concerns at Hammond.

Findings showed that participants’ perceptions and practices were not fully

aligned. Data showed that participants had a basic knowledge of formative assessment

and perceived many benefits of its regular use. Studies have found that formative

assessment implementation can be affected when teachers do not fully understand what is

meant by formative assessment (Chan et al., 2014; Chappuis, 2015). Similarly, OECD
(2013) found that teachers do not successfully implement formative assessment in their

classrooms when they do not understand what makes a task formative. Because teachers

make their own meaning of the words “formative assessment,” their interpretation can

affect implementation; therefore, I asked participants to define formative assessment.

The definition of formative assessment, based on research in the literature review, is a

process in which classroom tasks, planned or unplanned, are used regularly during the

learning process to provide feedback about students’ current levels of understanding so

that teaching and learning can be modified to address any gaps in learning and to improve

student achievement (Black & Wiliam, 1998b; CCSSO, 2008; Chappuis, 2015; Clark,

2012b; Stiggins & Dufour, 2009). The definition has four main components, each

important to understanding the formative assessment process, against which participant

responses were compared.

components of the formative assessment definition and cited several examples of

strategies that they used to check for student understanding. Participants also stated that

they believed teachers should formatively assess all their students several times

throughout each class session. Even though most participants were observed delivering

at least two formative assessment tasks during a class session, they did not implement the

strategies in a manner that gave all students an opportunity to show their understanding.

Instead of implementing formative assessment to check how well all students understood

learning goals, participants predominately collected feedback from only a few students.

Participants acknowledged that they only collected a limited amount of student feedback;

they felt student participation during formative assessment was a barrier to collecting
more responses. Participants also indicated that they needed or wanted to learn more

about implementing formative assessment in their classroom. Study findings, therefore,

suggested that participants might benefit from learning formative assessment strategies

that would allow them to consistently check for student understanding and to collect

feedback from more students.

Clearly written and shared learning targets are essential to the formative

assessment process. Participants did not consistently construct or communicate clear

learning targets for their lessons. Stefl-Mabry (2018) recommended that any formative

assessment “should be designed to collect information related to a targeted learning

objective” (p. 55). Additionally, Brookhart and Moss (2014) stated that students are

“flying blind” if only the teacher knows the learning goals; in classrooms where teachers

shared the learning goals with students, the interaction “made all the difference” (p. 28).

Therefore, having clear learning targets is integral to the formative assessment process

because students will know what they are required to learn, and teachers can elicit

feedback that will help determine their level of understanding (Fisher & Frey, 2014a;

Tomlinson, 2014; Wiliam, 2018).

Student feedback plays a critical role in formative assessment (Wiliam, 2014).

The feedback cycle involves teachers using a formative assessment strategy to elicit

feedback about student understanding and then using the student feedback to determine if

adjustments are needed to support student learning. How teachers implement the

formative assessment strategy can determine the amount and quality of student feedback

the teacher collects (Fisher & Frey, 2014a). Once teachers collect feedback about student
understanding, they can respond by using the information to make instructional decisions

about how to proceed (Chappuis, 2015). Teachers might find that feedback shows

students understand what is being taught, in which case teachers can continue with the

lesson. If feedback shows students do not understand, then teachers can make the

necessary instructional adjustments to help clarify student misunderstandings. After

teachers have adjusted their instruction, they should assess student understanding again

(Fisher & Frey, 2014a). The feedback cycle continues in this manner with each

formative assessment strategy the teacher uses throughout the class session.

Participants inconsistently implemented formative assessment strategies to collect

feedback about students’ current levels of understanding. Review of the observational,

interview, and log data showed that three main formative assessment strategies were used

regularly by participants at Hammond: warm-ups at the beginning of class; exit slips at

the end of class; and formative questioning used throughout instruction. Data showed,

however, that even though participants periodically implemented formative assessment

strategies during class, they either (a) did not collect any feedback about student

understanding, or (b) they collected feedback from a limited number of students.

Consequently, Fisher and Frey’s (2014a) conclusion is supported in my study: teachers

do not check how well most of their students understand what is taught during class.

For example, participants did not collect feedback from warm-ups and exit slips to assess

student understanding of the lessons’ learning goals. Participants gave warm-ups with

the intention for students to complete and correct them on their own, not as a tool for

teachers to check for student understanding so that they could make informed
instructional adjustments. Likewise, participants did not implement exit slips in a way

that allowed them to check for student understanding. Cornelius (2014) specified that

exit slips were designed for teachers to collect formative feedback at the end of class so

that teachers could quickly review which students understood the learning goals and

which did not. Therefore, during both the warm-up and exit slip tasks, participants

missed an opportunity to review student feedback to determine current levels of

understanding. Without gathering and analyzing student feedback, teachers cannot make

necessary instructional adjustments to help close gaps in student understanding.

Collection of student feedback was further inhibited when students were

permitted to call out answers after participants asked formative questions to the class.

Allowing students who knew the answer to immediately state it aloud meant most

students were rarely able to demonstrate their understanding. Consequently, as I

witnessed during observations, the same few students answered most of the formative

questions throughout the class sessions. Duckor and Holmberg (2017) also found that

teachers typically only gather feedback from a few of their students. Having a small

sample of student understanding in their class does not allow teachers to recognize what

most students do or do not understand. When participants predominantly gather feedback

about student understanding from only a few students in class, they may reach incorrect

conclusions about student learning. Duckor (2014) and Wiliam (2017) found that

teachers often incorrectly conclude that if a few students give correct answers, all

students understand. They warned that this assumption can be problematic because it can

lead to a false sense of student understanding based on limited feedback. Assuming


feedback from a few students represents all students can also lead to missed opportunities

to address misunderstandings, possibly those of most of the class.

Formative questioning, the most implemented formative assessment strategy, was

found to be mostly unplanned and convergent, or low-level. Planning of formative

questions and the types of questions asked can affect the amount and quality of student

feedback teachers collect. Marshall and Smart (2013) found that the quality of teachers’

formative questions increased when they planned formative questions ahead of time,

allowing for them to better reveal students’ current levels of understanding. However,

my findings show, like Wiliam (2014), that most teachers do not plan their questions to

check for student understanding. Wiliam (2014) warned that if formative questions are

not planned, then teachers may wrongly conclude that “students are on the right track

when, in fact, their understanding of the subject is quite different from what they

[teachers] intend” (p. 17). Planning also allows teachers to be intentional about their

formative questioning. Teachers can determine which concepts during the lesson

students are likely to have misconceptions about and design questions to uncover student

thinking.

The level of questions teachers ask can determine how much information about

student learning they gather. Duckor and Holmberg (2017) asserted that with low-level

questioning “the teacher is working for a predetermined response. A response. One. Not

the wide range of responses formative assessors need in order to make valid, sound

inferences on the current levels of understanding to meet students where they are” (p.

170). Although both low-level and high-level questions are needed to determine student
understanding, high-level questions give more insight into student thinking and how

instruction should be adjusted. Findings suggested that hearing correct answers to low-

level formative questions also led participants to conclude that students understand

concepts. Again, this assumption could result in missed opportunities to address student

misunderstandings. High-level questions, also called divergent questions, should be

planned to help ensure students have a deep understanding of concepts. Increased

understanding can lead to increased achievement on classroom, district, and state-level

assessments.

Findings also showed that there was consistently low student participation during

formative assessment. Participants did not offer all students opportunities to participate

during formative assessment so that students could demonstrate their understanding. The

way teachers implement formative assessment can allow them to elicit a large or a small

amount of student feedback. Johnson et al. (2013) stated that, especially in low-

performing urban schools like Hammond, teachers need to collect feedback from most

students to develop an accurate picture of student understanding. Although formative

assessment permits a teacher to assess what students understand, the opportunity to

understand is “only available when students are empowered to participate” (Sezen-Barrie

& Kelly, 2017, p. 208). Findings, however, showed that participants recognized that

most of their students were not participating during formative assessment. During the

interviews, participants openly expressed frustration about how low student participation

resulted in little feedback about student understanding, which, subsequently, made it

difficult to determine how well the class understood the learning goals. In fact, all but
one participant stated that his or her main barrier to the implementation of formative

assessment was that most of the students did not participate, especially during formative

questioning. Although no instructional technique can ensure students participate during

formative assessment, teachers can purposefully implement formative assessment

strategies to give more students opportunities to show their understanding. Duckor

(2014), Fisher and Frey (2014a), and Helf (2015) also found that teachers did not often

provide opportunities for students to respond in class when they attempted to elicit

information about student understanding. Instead, the researchers noted that teachers

relied on the traditional technique of asking a question to the class, having one student

answer, and then giving feedback to that student about his or her answer. Furthermore, Haydon

et al. (2013) reported that students in low socioeconomic schools, such as Hammond, are

given fewer opportunities to respond in class than in other schools.

The study findings connected to the problem statement, research questions, and

conceptual framework. School leaders at Hammond were concerned about teachers’

inconsistent formative assessment practices to check for student understanding and to

adjust instruction. Leaders recognized that formative assessment implementation at their

school might be related to student achievement problems because of students not

understanding curricular concepts taught in their classes. Three research questions were

developed to provide information about teachers’ formative assessment implementation

to help leaders make informed decisions to support consistent use of this practice.

Findings linked to the research questions can give leaders insights into how teachers use

formative assessment to check for student understanding and to adjust instruction. Study
findings showed that even though participants unanimously believed that all students’

understandings should be checked daily, their current implementation of formative

assessment showed that this was not the case. The strategies and techniques participants

used to implement formative assessment only allowed them to elicit feedback from a few

students; participants did not give all students an opportunity to show their understanding

during formative assessment. This finding directly related to RQ1. Overall, teachers

were not implementing formative assessment daily in a way that allowed them to assess

all their students’ understanding.

Using student feedback to adjust instruction is an important part of the formative

assessment process. Study findings connected to RQ2. Participants discussed and

demonstrated several ways that they adjusted their instruction based on student feedback

from formative assessment strategies. If student misunderstanding was not evident, the

participants continued with the lesson. If misunderstandings were evident, data showed

most of the participants stopped instruction, often to reteach or to complete more practice

problems with the whole class. When participants re-explained, they often did so in the

same way they had originally explained the content. McGlynn and Kelly (2017) discussed the

importance of teaching differently when adjusting instruction, advising that it is

“imperative” to reteach in a different way because if students did not understand the first

time, then reteaching in the same way “still won’t make sense to them” (p. 24).

Andersson and Palm (2017) also found that working with the whole group was the most common way that

teachers adjusted instruction. Instructional adjustments, however, can only be

accomplished through the deliberate use of formative assessment to obtain a complete


picture of student understanding (Wiliam, 2014). Wiliam (2014) stated that teachers

should attempt to gather formative feedback from every student to properly plan the next

steps in instruction. Participants at Hammond did not elicit or collect formative feedback

from most students; therefore, they would not have the necessary information to make

informed instructional adjustments. Even though participants regularly adjusted

instruction to reteach a concept, doing so was based on the responses of only a few

students. Fisher and Frey (2014a) advised that student feedback should be collected after

the teacher adjusts instruction to once again determine if students understand the content.

Many participants in the study checked student understanding after making an

instructional adjustment by asking the whole class a question such as “Does everybody

understand now?” This question was met with either silence or a few students saying

“yes” aloud. The result was participants continuing with the lesson without knowing

most students’ current levels of understanding. Therefore, even though participants used

formative assessment strategies during class and often modified their instruction, they did

not determine if the instructional adjustments increased student understanding. Without

rechecking for student understanding, participants could not determine if additional

instructional adjustments were necessary.

RQ3 explored teachers’ perceptions of formative assessment to check for

understanding and to adjust instruction. Findings showed that several factors influenced

participant formative assessment implementation: student participation during the

formative assessment feedback cycle, teacher knowledge and beliefs about formative

assessment, and time. Answers connected to RQ3 may explain why participants’ beliefs
and practices did not align. Five participants stated that if they could elicit more student

participation during formative assessment tasks, then they would implement formative

assessment strategies with greater fidelity. These comments suggested participants

lacked the knowledge or skills to implement formative assessment in a manner that

would encourage student participation so that participants could collect more feedback

about student understanding. When the participants were probed further about strategies

they used to engage more students during formative assessment tasks, four participants

mentioned they had tried “thumbs up” or using popsicle sticks with names to call on

more students. They stated that these strategies were unsuccessful. The fifth participant

admitted, “I do not have any idea how to get more students to answer formative

questions. I just feel that until the students start to see the value of participating, there is

nothing I can do.” Data from observations further showed a lack of knowledge of

strategies and skills that would support more student participation during formative

assessment. Participants’ statements about supports including more teacher

collaboration, increased technology use, and targeted professional development gave

further insights into RQ3.

The study findings also connected to the study’s conceptual framework.

According to Black and Wiliam’s (1998b) formative assessment theory, student learning

can be intentionally enhanced when teachers collect formative feedback to determine

how well students are progressing toward a learning goal. Based on the social

development theory, which is closely tied to the formative assessment theory, students

need to be active during the formative assessment process (Vygotsky, 1978). Most
participants’ students, however, were not active during formative assessment. For

teachers to target students’ ZPDs, the mental space between what students know and

what they are working toward, they must determine what students currently understand.

Teachers can then adjust instruction to help bridge the gap between what students

currently understand and what students still need to understand (learning goals).

The goal of this study was to examine teachers’ formative assessment use to

check for understanding and to adjust instruction. The results of this study indicated a

gap in teacher understanding and implementation of formative assessment regarding

consistency and fidelity. Participants’ inconsistent implementation of formative

assessment strategies to check for student understanding from a limited number of

students may be contributing to Hammond’s achievement issues. Therefore, I developed

a targeted 3-day professional development with year-long sustained support in

professional learning communities (PLCs) to help teachers implement formative

assessment strategies that allow more students opportunities to show their understanding,

so teachers can collect adequate feedback about student learning. If teachers

intentionally collect adequate formative feedback, then they may determine what

instructional adjustments they need to help students meet learning goals, which may

increase overall student achievement.

Summary

Section 2 described the study’s methodology, including the research design,

research tradition, justification for the design, criteria for participant selection, access to

participants, and measures to protect the participants from harm. I included detailed
descriptions about the three data collection instruments, how the integrity of the data was

maintained, and the data analysis process and findings. The study research design was a

qualitative case study developed to gather information about how teachers at Hammond

used formative assessment to check for student understanding and to adjust instruction.

The qualitative tradition allowed for thick, rich descriptions needed to understand

teachers’ current formative assessment practices. I obtained consent from the 10

participants in this study, and confidentiality was maintained. Data from observations,

interviews, and logs provided information about the three research questions. I used open

and axial coding to thoroughly analyze the data and identify four main themes:

implementation, the feedback cycle, knowledge and beliefs, and barriers and supports.

Findings gave insights into the three research questions. Participants checked for

understanding using three main strategies—warm-ups, exit slips, and formative

questioning. Participants implemented warm-ups and exit slips with all students;

however, they did not collect student responses to determine levels of understanding.

During formative questioning, the most frequently used formative assessment strategy,

participants predominantly used the Initiate-Response-Evaluate (IRE) method of

questioning, which only provided them with feedback from a few students. Therefore,

because of how teachers implemented formative questioning, most students’

understandings were left unchecked while they were learning new concepts. Without

giving all students opportunities to participate during formative assessment and collecting

adequate student feedback, participants could not determine what students did or did not

understand. Consequently, participants could not make informed instructional


adjustments to help students meet learning goals. By considering participants’ current

implementation practices and their perceived barriers and supports for implementing

formative assessment with more fidelity, I developed a project to address the findings.

Section 3 describes the project developed from study findings. The project is a

professional development that may help teachers consistently use formative assessment to

increase the number of students meeting learning goals by implementing instructional

strategies that may help them collect feedback about all students’ understanding. I

provided a rationale for the project choice and a review of literature supporting the

project genre and content. Details of the project are included such as an outline of goals,

timelines, materials, the implementation process, and an evaluation plan. Additionally, I

discussed possible social change implications resulting from the project along with the

project’s importance to Hammond stakeholders.


Section 3: The Project

Introduction

Study findings showed that participants collected formative assessment feedback

from a limited number of students, which affected their ability to make informed

decisions about how to adjust instruction. Teachers were not offering all students an

opportunity to show if they understood what was being taught. Not collecting feedback

from most students resulted in participants making instructional adjustments based on the

understanding of a few students. Fisher and Frey (2014a) and Wiliam (2013) asserted

that teachers should use strategies that allow them to formatively assess all students so

that they can obtain an accurate view of the current levels of understanding of their class

to make informed instructional decisions.

Data showed that student participation during formative assessment was a barrier

to participants’ formative assessment use. Participants felt that the lack of student

involvement during formative tasks inhibited their implementation of formative

assessment. Research shows that when teachers incorporate instructional strategies that

give students more opportunities to respond during formative assessment, student

participation increases (Duckor, 2014; Tincani & Twyman, 2016). The result can be

greater student feedback during formative assessment. Participants also reported that

improved student participation during formative assessment would encourage them to

implement formative assessment with greater fidelity.

In this section, I introduce my project, which, as a result of my findings, consists

of professional development sessions that may help teachers elicit responses from a
greater number of students during formative assessment tasks. The section includes the

goals of the project, the rationale for selecting professional development as the project

genre, a review of the literature about the genre and content of the project, and a detailed

implementation and evaluation plan. Documents supporting the project’s implementation

are provided in Appendix A.

Project Goals

I developed this project, which resulted from my findings, to help teachers

consistently use formative assessment to check for understanding and to adjust

instruction so that students can meet learning goals. The overarching goal of this project

is to train teachers to use instructional strategies that would elicit more student responses

during formative assessment so teachers can collect more feedback about student

understanding than they currently do. As a result, teachers can make more informed

instructional adjustments to help address gaps in student learning. The overarching goal

is divided into three project goals. Each goal was derived from the study findings

may help support consistent use of formative assessment.

Project Goal 1

Teachers will write and align clear student learning goals using state and district

standards for each lesson 100% of the time. Even though several participants posted or

stated learning goals for the lessons I observed, the goals were often not specific and not

aligned with their formative assessment. Determining what goals students need to reach

is critical in the formative assessment process so that student understanding represents

progress toward a defined goal. Teachers can detect any misunderstandings from student
responses and decide what instructional adjustments should be made to help bridge the

gap between what students currently understand and the established learning goals.

Project Goal 2

Teachers will collect formative assessment feedback from most of their students

using whole-group opportunities to respond (OTR) strategies at least once every 20 to 30 minutes at the

rate of three questions per minute for non-written responses and one question per minute

for written responses. Data showed that participants collected limited information about

student understanding during formative assessment such as warm-ups, exit slips, and

formative questioning—the most used formative assessment strategies at Hammond.

Without a comprehensive picture of student understanding, teachers cannot determine

what instructional adjustments, if any, students need to meet learning goals. Participants

conveyed that they would like to learn more about and improve upon how they

implement formative assessment. Participants recommended professional development

training and collaboration as supports.

Project Goal 3

Teachers will adjust their instruction daily as needed using student feedback

collected during formative assessment to help students meet learning goals. Data showed

that participants based their instructional adjustments on a few students’ responses. The

most common adjustments were made during class after formative questions. Feedback

from warm-ups and exit slips rarely resulted in instructional adjustments. To bridge the

gap between current student understanding and the intended learning goals, teachers must
collect and use student feedback from formative assessment to make instructional

changes that may progress learning.

Rationale

School leaders expressed a concern over the lack of consistent use of formative

assessment to check for student understanding and to adjust instruction. As a result of

my project study findings, I established a professional development plan to provide

opportunities for teachers to learn instructional strategies and techniques that may help

them elicit responses from more students during formative assessment. Therefore,

teachers may determine their students’ current levels of understanding so they can adjust

their instruction accordingly. After the initial professional development sessions,

teachers will receive ongoing support from their involvement in PLCs, which were

already established at Hammond. In fact, Dehdary (2017) declared, “There is no doubt

that PLC should be added to the recipe of teacher development” (p. 652). Collaborative

support during PLCs may help sustain what teachers learned in the professional

development sessions.

I chose professional development for my project based on several factors.

Participant interview data showed that professional development was a logical choice to

address formative assessment use at Hammond. Most participants indicated that they felt

they needed to learn more and to improve their practice through professional

development. Van den Bergh, Ros, and Beijaard (2015) stated that teacher willingness to

acquire new skills was a solid foundation for professional development learning.

Participants thought that collaborating with other colleagues about formative assessment
practices would be a valuable method by which they could be supported. Participants

desired time to plan, develop, and share formative assessment implementation ideas and

strategies with other colleagues, which can be incorporated into professional

development sessions and PLCs.

Professional development can also be an effective approach for developing

targeted areas of skills about an instructional practice (Desimone & Garet, 2015; Hill,

Beisiegel, & Jacob, 2013) such as specific instructional strategies to help collect

formative assessment feedback. Moreover, professional development is especially

successful at bringing about school-wide change when it focuses on teacher practices like

instructional strategies or techniques (Desimone & Garet, 2015). Likewise, Guskey

(2017) found that professional development, such as workshops and trainings, can be a

valuable way to improve teacher practice. Training in targeted strategies enables teachers

to replicate those strategies consistently or more frequently (Kennedy, 2016). Bayar

(2014) declared that there is “no doubt in the literature regarding the potential of

professional development activities to help both novice and experienced teachers in

developing their existing skills and in acquiring new ones” (p. 321). Researchers have

also recommended school leaders provide professional development to specifically equip

teachers with formative assessment knowledge and skills to improve their

implementation (Black, 2015; Chroinín & Cosgrave, 2013). In fact, studies have shown

that professional learning opportunities had “the highest impact” on the quality of

teachers’ formative assessment practices and were “crucial” for consistent formative

assessment implementation (Heitink et al., 2016, p. 58). Many studies have shown that
professional development has a positive effect on teachers’ formative assessment

practices (Andersson & Palm, 2017; Cisterna & Gotwals, 2018; Furtak et al., 2016;

Kintz, Lane, Gotwals, & Cisterna, 2015; Randel, Apthorp, Beesley, Clark, & Wang,

2016). Therefore, because findings of this study showed that teachers at Hammond

would benefit from learning strategies and techniques that would allow them to

consistently implement formative assessment to check for student understanding and to

elicit responses from a larger number of students, creating professional development that

focuses on these skills would be logical. In addition, professional development has been

found to be effective in urban schools, like Hammond, where there is often high teacher

mobility. The information and materials from professional development can easily be

made available or repeated for new or incoming teachers (Desimone & Garet, 2015).

The active learning and modeling offered in this project’s professional

development may help teachers develop a comprehensive understanding of strategies that

give a greater number of students an opportunity to respond during formative assessment.

The strategies, which have been shown to be successful with a wide range of students

(Cakiroglu, 2014; Clarke, Haydon, Bauer, & Epperly, 2016; Haydon et al., 2013; Kira et

al., 2013; Messenger et al., 2017), can be immediately implemented into any classroom at

any grade level. Very few resources are needed for this project, making it a cost-

effective plan for a school with a limited budget.

I selected PLCs to help sustain the new learning from professional development.

During traditional workshops, teachers may learn new skills and knowledge, but they

need time and support to transfer what they learned into practice (Oweis, 2014). Many
studies have shown that professional development training is often coupled with some

form of PLC to support and sustain learning (Kennedy, 2016). PLCs have been fully

established at Hammond for about a decade; therefore, teachers have experience working

with this type of professional learning format. The existing PLC structure may provide

ongoing support for teachers throughout the school year. Alternatively, school leaders

can allow time during staff meetings for teachers to meet in groups and to use the PLC

resources provided in this project.

Review of the Literature

The literature review I conducted related to the proposed project, which was based

on my analysis of data collected at Hammond. I used the literature review and study

findings to create professional development for teachers at Hammond High School. The

professional development may provide teachers with instructional strategies and

techniques targeted to help them elicit responses from more students during formative

assessment and, therefore, improve their ability to consistently implement formative

assessment to check for student understanding and to adjust instruction. I found research

articles, publications, and books by searching the following university databases:

Academic Search Complete, EBSCO, ProQuest, SAGE Premier, Education Research

Complete, Taylor and Francis Online, Google Scholar, and ResearchGate. Search terms

included professional development (PD), formative assessment professional development,

effective professional development, professional learning communities (PLC),

professional learning, sustained professional learning, teacher professional development

(TPD), opportunities to respond (OTR), teacher-directed opportunities to respond (TD-


OTR), formative assessment engagement, active response strategies (ARS), total

participation techniques (TPT), and whole group response. I used peer-reviewed

resources predominately published within the past 5 years to provide current research for

the development of my project.

Professional Development

Professional development is “structured professional learning that results in

changes to teacher knowledge and practices, and improvements in student learning

outcomes” (Darling-Hammond, Hyler, & Gardner, 2017, p. 2). Therefore, professional

development was an appropriate choice to help teachers develop and expand upon

instructional strategies that may help support their formative assessment use.

Professional development generally takes the form of workshops, learning communities,

continuing education programs, and action research (Brown & Militello, 2016). For any

type of professional development to be successful at promoting positive change, several

components must be in place. There is substantial agreement in research about what

constitutes effective teacher professional development (Smylie, 2014). The following

seven characteristics of effective professional development were used to guide the

development of this project:

1. Matching School Needs. Professional development should correspond with

current school needs and should consider the school’s student population

(Bayar, 2014). Also, as Smylie (2014) indicated, professional development is

“most effective if it is a coherent part of a larger school improvement effort”

(p. 103). Coherence means that professional development goals, content, and
activities are consistent with school priorities, school leader and student

needs, and teacher knowledge and beliefs (Desimone & Garet, 2015). In fact,

Desimone and Garet (2015) affirmed that teachers were more likely to

implement ideas from professional development when the ideas correspond to

school leaders’ initiatives. For many years, school leaders at Hammond have

made increasing student achievement a school priority and have selected

formative assessment as one of the strategies to support this goal. Also, this

case study was designed specifically to meet school leaders’ need to

understand teacher formative assessment practices so that leaders can support

its implementation. Improved formative assessment implementation may

result in improved student achievement. The planned professional

development is intended to assist school leaders with this goal.

2. Matching Teacher Needs. To be effective, professional development must

address existing needs of the participants (Bayar, 2014; Stewart, 2014) and

focus on issues relevant to their classroom work (Patton, Parker, & Tannehill,

2015). Teachers also want to learn instructional skills that they can

immediately implement in their classroom (Matherson & Windle, 2017). In

other words, professional development should address the real challenges

teachers encounter in their schools and classrooms. To understand what needs

exist, those planning professional development should have information about

current teacher practices so content can be prepared to bridge the gap between

current and desired teacher practices (Lauer, Christopher, Firpo-Triplett, &


Buchting, 2014). This study explored teacher formative assessment practices

and uncovered that there was a need to support formative assessment

implementation to collect feedback about student understanding. Addressing

this need would be beneficial to teachers’ efforts to help students meet

learning goals.

3. Clearly Communicated Learning Goals. Professional development planning

should start with clear goals in mind so that the learning activities can have

“purpose, cohesiveness, and direction” (Guskey, 2014, p. 12). Earley and

Porritt (2014) found the ability to strategically conduct professional

development was connected to clear goals and intentions. Professional

development must begin with openly defined learning goals that are

communicated to the staff (Guskey, 2017). Participants should understand the

current problem being addressed and why professional development is needed

(Lauer et al., 2014). It is also important for participants to know what

outcomes are anticipated as a result of the staff training (Guskey, 2014). With

a clear focus on the professional development goals, everyone involved will

know the purpose of what they are learning and what is expected of them

during the process.

4. Focus on Specific Tasks. To be effective, professional development content

should concentrate on specific instructional tasks and teaching skills to

improve daily teaching (Patton et al., 2015). Darling-Hammond et al. (2017)

found that professional development is more likely to positively affect teacher


implementation if it is focused on a narrow set of practices. Data from this

study showed specific areas where teachers inconsistently implemented

formative assessment to check for student understanding. The professional

development resulting from my project findings may help teachers learn

instructional strategies and techniques they can immediately implement to

support more consistent formative assessment use. One task identified in the

study that will be the focus of the planned professional development at

Hammond is to help teachers provide opportunities for all students to respond

during formative assessment. Guskey (2014) advised that instructional

practices offered at professional development must be research-based from

reliable sources so time and resources are not wasted on unproven practices.

Also, Smylie (2014) suggested that during professional development,

presenters should model new strategies so that teachers can visualize what

they look like in practice.

5. Active Learning. Professional development should be designed according to

how teachers learn. Research shows that active participation is essential for

adult professional development learning. Active learning means that teachers

are engaged in the instructional practices that they are learning (Darling-

Hammond et al., 2017); they are not just sitting and listening passively to

lectures (Bayar, 2014). Matherson and Windle (2017) found that teachers

want professional development sessions that are interactive, engaging, and

relevant. Teachers also want time to practice new strategies before


implementing them in their classrooms by participating in hands-on activities

such as role-playing, simulations, and problem-solving (Lauer et al., 2014;

Smylie, 2014). Teachers need time to learn and practice new strategies if they

are to change their practice (Darling-Hammond et al., 2017). Experience with

new learning helps teachers understand how to incorporate instructional

strategies into their current practices and to become comfortable with their use

(Chroinín & Cosgrave, 2013; Heitink et al., 2016). In addition, active

engagement helps teachers develop meaning from new learning, promotes

deeper understanding of concepts, and increases teacher motivation to

implement what they learned (Learning Forward, 2013; Patton et al., 2015).

At the end of professional development sessions, teachers should have time to

reflect on what they learned (Smylie, 2014). Reflection, which helps transfer

new learning into practice (Oweis, 2014), will be an important component of

PLC support for this project.

6. Collaboration. Teacher collaboration is another important feature of effective

professional development. Darling-Hammond et al. (2017) stated, “High-

quality PD creates space for teachers to share ideas and collaborate in their

learning” (p. v). Time should be allotted during professional development

sessions for group discussions that allow teachers to share knowledge,

insights, and ideas as they make meaning of new learning (Lauer et al., 2014).

Studies have also shown that teachers need to collaborate about shared

problems of practice (Heitink et al., 2016). Collaboration influences a


teacher’s thinking, motivation, and instructional practices (Earley & Porritt,

2014). Engaging openly with colleagues can help teachers build trust which

may encourage them to take greater risks when trying new instructional

practices (Patton et al., 2015). Collaboration should continue after the initial

professional development. Smylie (2014) found that frequent dialogue with

colleagues was effective in progressing and sustaining implementation of new

practices. PLCs can give teachers the time needed to continue collaborating

after the initial professional development sessions are completed.

7. Ongoing Support. Sustained support is crucial to successful adaption of new

learning (Earley & Porritt, 2014; Patton et al., 2015). Learning Forward

(2017) stated that sustained professional development means “intentional and

focused learning for the period of time required for successful

implementation” (p. 56). The time period should be more than one day or a

brief, independent workshop (Learning Forward, 2017). Sustained learning

also requires “frequent interaction, collaboration, and dialogue” (Oweis, 2014,

p. 27). Lauer et al. (2014) advised that continued support during the

implementation phase of professional development was essential for

sustainability. They found that providing time for regular short meetings

where teachers could collaborate by sharing experiences, successes, and

failures after implementing new skills was valuable. In fact, studies on

teacher perspectives about professional development have shown that

teachers, realizing that change takes time, want learning opportunities that are
supported long-term (Bayar, 2014; Matherson & Windle, 2017). Although the

actual length of continued support depends on the “desired learning objectives

and topic complexity” (Lauer et al., 2014, p. 216), for new learning to have a

lasting effect in the classroom, it should have steady support over the course

of at least a year (Kennedy, 2016; Matherson & Windle, 2017). The time

school leaders provide for teachers to engage with what they learned in

professional development is essential for sustained, effective implementation

of new learning (Desimone & Garet, 2015).

It is important to note that the above professional development components integrate

well with adult learning theory introduced by Knowles (1973). Knowles’ five underlying

assumptions, outlined by Glickman, Gordon, and Ross-Gordon (2007), were that adult

learners (a) want the reasons for the new learning and why it is important to them

(connects to effective professional development components one, two, and three above);

(b) are self-driven and put forth effort when given focused goals (component three); (c)

want opportunities to apply new knowledge (components four and six); (d) bring past

knowledge and a variety of experiences that should be used in their learning (components

five and six); and (e) are generally self-directed and active learners (components five and

six). Aligning adult learning theory with effective professional development practices

may help foster the adoption of new instructional practices that support consistent

formative assessment implementation.


Professional Learning Communities

As discussed in the previous section, ongoing support is required to sustain

professional development learning. One opportunity to provide this support is through

PLCs. A PLC is defined as “professional learning that increases educator effectiveness

and results for all students occurring within learning communities committed to

continuous improvement, collective responsibility, and goal alignment” (Learning

Forward, 2017, para. 1). Matherson and Windle (2017) reported that PLCs can sustain

professional development learning so that teachers can continue to improve over time.

Continued learning happens best when several components are present in the PLCs: (a)

clear mission and shared values about what the group wants to accomplish, (b) genuine

dialogue that is open and constructive and respects everyone’s thoughts, (c) collective

reflection that promotes individual growth, (d) atmosphere of trust that supports

implementation of new ideas, and (e) supportive leadership that allocates time for

teachers to meet (Dehdary, 2017). Stewart (2014) affirmed that learning is significantly

influenced when teachers are supported by peers in PLCs. Learning communities allow

teachers the time to work collaboratively so that they can monitor, reflect, and improve

on their practices (Learning Forward, 2013). In PLCs, learning is active, meaning

teachers learn with and from one another (Stewart, 2014). Oweis (2014) suggested that

to transfer new knowledge and skills from professional learning, such as traditional

workshops or trainings, teachers need a community where educators can support one

another with their implementation, give and receive feedback, discuss problems, and

work on solutions together. For my project, I plan to utilize the existing PLC structure at
Hammond for ongoing teacher collaboration once the initial professional development

sessions are completed.

Two areas of focus in the PLCs will be reflection on and feedback about

formative assessment practices. Reflection and feedback have been found to be

significant components of effective professional learning (Earley & Porritt, 2014;

Stewart, 2014). They are also important elements in adult learning theory (Knowles,

1973), and together they help teachers constructively transfer learning from professional

development to the context of their own practice (Darling-Hammond et al., 2017).

Engaging in reflection and feedback can “support transfer of knowledge and skills into

practice as part of ongoing professional learning” (MDE, 2011, p. 12). Reflection and

feedback will not only be used to help strengthen how teachers implement strategies to

give all students an opportunity to respond to formative assessment, but they may also help

teachers to adjust their instruction based on the student feedback they collect.

Although reflection and feedback work together, each has its own role to advance

learning. For teachers, reflection means “consciously thinking about the strengths and

weaknesses of one’s practices” (Van den Bergh et al., 2015, p. 143). Reflection allows

teachers to “acknowledge what works; what does not; and what additional resources,

training, and practices are needed”; however, time for teacher reflection is often missing

from sustained support (Brown & Militello, 2016, p. 706). Patton et al. (2015)

recommended teachers have time to discuss their reflections on and experiences with new

learning regularly after the initial professional development. PLCs give teachers the time

needed to reflect, and more importantly, the time to discuss reflections with other
colleagues. Hadar and Brody (2016) found that the benefits of group reflection in PLCs

are threefold: Reflection (a) enhances and deepens understanding; (b) invites

communication and forward thinking; and (c) promotes mutual expectations,

commitment, and action in others. Therefore, reflection not only enhances individual

learning, but it also inspires group learning (Hadar & Brody, 2016). Unfortunately,

teachers are rarely given time to reflect on, implement, and discuss new learning; the

result is ineffective transfer of professional development learning into practice (Oweis,

2014).

As teachers implement new strategies to help them collect more student feedback,

they will have opportunities to make more informed instructional adjustments based on

the results. During PLCs, teachers can reflect not only on their experiences with

collecting student feedback, but also on what instructional adjustments, if any, they made

because of the student feedback they collected. Reflection about adjusting instruction is

especially important because studies have shown that teachers often struggle to find

meaningful ways of adjusting their instruction to address student misunderstandings

(Miranda & Hermann, 2015; Wood et al., 2016; Wylie & Lyon, 2015). Even though

observational data showed that participants at Hammond regularly adjusted instruction

after collecting limited student feedback, there was room for improvement. Furthermore,

participants acknowledged that they wanted to improve on how they adjusted instruction

after they gathered student feedback, especially if more students had an opportunity to

respond. PLCs can present opportunities for teachers and their colleagues to reflect on

and discuss ways that they can successfully adjust instruction to help students meet
learning goals. Instructional adjustments that can improve student understanding require

a deep knowledge of content as well as identifying how to best bridge the gap between

current levels and desired levels of student understanding (Chappuis, 2015). Stewart

(2014) found that PLCs organized by similar academic disciplines allow for deeper

learning and support. The PLCs at Hammond were divided by content areas, so the

groups consisted of colleagues who shared subject-area knowledge. Teachers in the same

content area can provide critical dialogue about ways to adjust instruction to support

specific concepts when formative assessment feedback shows students are struggling to

understand a lesson.

Instructional feedback also plays an important role in PLC sustained support.

Darling-Hammond et al. (2017) found that professional development associated with

increased student learning regularly offered time for teachers to receive feedback about

their practices and to make necessary improvements. If teachers do not receive feedback

about how they implement new practices learned during professional development, then

they may either become frustrated because they do not know if they are implementing

them correctly, or they may abandon what they learned (Brown & Militello, 2016).

Discussing and observing teachers implementing instructional strategies require trust

among colleagues. Stewart (2014) advised that teachers in a PLC should feel

comfortable with one another so they can give and receive honest, constructive feedback.

Hadar and Brody (2016) discussed essential elements that need to be established in PLCs

for teachers to feel safe enough to share thoughts with one another: (a) equal status and

respect, (b) empathy and understanding of differences in instructional approaches, (c)


group norms that encourage risk-taking, (d) support to overcome fear of unsuccessful

implementation, and (e) administrative support for experimentation and innovation.

Open conversations about thoughts and experiences while implementing new learning

can help teachers build on one another’s ideas, which “deepens and enriches both thinking

and insights” of all involved (Hadar & Brody, 2016, p. 66).

PLCs can help support and advance what teachers learned during the professional

development sessions about opportunities for students to respond during formative

assessment. Haydon, MacSuga-Gage, Simonsen, and Hawkins (2012) advised that

teachers should aim to increase the quality and quantity of their OTR strategies

throughout the school year. Reflection, monitoring, and feedback during PLCs will play

an important role in this improvement. Haydon et al. (2012) developed a series of steps

teachers can use to self-monitor their OTR implementation. The steps are as follows:

1. Determine the present level of performance by recording (e.g., tallying, using a

frequency counter app, or video recording) their OTR use for a period of 3-5

days.

2. Develop a plan to increase their OTR strategies and frequency by setting a

specific, measurable, and observable goal.

3. Monitor teacher implementation and make changes as necessary.

4. Use the data collected to graph and review rates of OTR use.

5. Adjust goals and implementation accordingly.

Teachers can discuss and reflect upon the process during PLCs. Haydon et al. (2012)

also suggested a hybrid approach to the above process. Teachers can (a) self-monitor to
collect baseline data, (b) share the data during PLCs, (c) receive feedback about how to

increase the quantity or quality of OTRs, (d) develop a plan with actionable goals, and (e)

discuss future data to help modify the plan. Therefore, “teachers may be both consultant

and consultee for each other, as they work to improve their practice. This symbiotic

relationship would provide both teachers with opportunities for reflective and

nonjudgmental professional development” (p. 7). I created a document for teachers to

use during PLCs that incorporates this hybrid approach and is based on the

“Opportunities to Respond Action Plan” tool Haydon et al. (2012) developed (see

Appendix A for PLC Action Plan to Increase OTRs During Formative Assessment). This

tool will support the valuable data collection, reflection, and feedback process necessary

to help teachers consistently implement formative assessment with all their students.
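
To illustrate how the tallied data from this self-monitoring process might be reviewed, the following short Python sketch is offered purely as an illustration; the session counts and the per-minute goal are hypothetical placeholders, not study data, and the three-OTRs-per-minute figure simply echoes the rate discussed later in this review:

    # Illustrative sketch (hypothetical numbers): reviewing self-monitored OTR tallies.
    # Each tuple: (day, questions tallied, minutes of instruction observed).
    sessions = [("Mon", 22, 50), ("Tue", 18, 50), ("Wed", 25, 50)]

    GOAL_RATE = 3.0  # hypothetical goal: three OTRs per minute for non-written responses

    for day, questions, minutes in sessions:
        rate = questions / minutes
        status = "meets goal" if rate >= GOAL_RATE else "below goal"
        print(f"{day}: {rate:.2f} OTRs per minute ({status})")

    # Baseline rate across all observed sessions, used to set or adjust goals
    total_questions = sum(q for _, q, _ in sessions)
    total_minutes = sum(m for _, _, m in sessions)
    print(f"Baseline: {total_questions / total_minutes:.2f} OTRs per minute (goal: {GOAL_RATE})")

A PLC group could substitute its members’ actual tallies and an agreed-upon goal when completing the action plan.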

Collecting Student Feedback During Formative Assessment

Study findings showed that teachers collected limited feedback about student

understanding during formative assessment. The main content of the project’s

professional development sessions will consist of instructional strategies that may help

teachers elicit more student responses during formative assessment. Implementing

formative assessment in a manner that gives all students an opportunity to show their

understanding can provide teachers with more feedback about student understanding than

they currently obtain. With more student feedback available, teachers can determine how

well students understand curricular concepts and make informed instructional

adjustments to help students meet learning goals.


The need for whole group response. An essential feature of formative

assessment is that it allows teachers to elicit feedback about current student

understanding (Chan et al., 2014). One of the most common strategies teachers use to

gather information about student understanding is formative questioning (Fisher & Frey,

2014a; Helf, 2015). Most teachers use the IRE model during questioning: the teacher

asks a formative question, a student or several students answer, and the teacher gives

feedback on whether the answer was correct or incorrect (Duckor, 2014; Pearsall, 2018;

Wiliam, 2014). The cycle continues throughout the class session (Wiliam, 2014). Helf

(2015) found that one of two scenarios often transpires during the IRE model: either the

teacher finds that only a small number of the same students volunteer to answer the

questions, or no students answer the questions. In the latter case, the teacher usually

gives hints, uses prompts, rephrases the question, or provides the answer himself. Even if

the teacher calls on students at random, he will only be assessing a couple of students at a

time (Duckor & Holmberg, 2017; Wiliam, 2014). As Duckor and Holmberg (2017)

highlighted, teachers cannot learn much about student understanding during formative

assessment if, for example, only 10% of the students respond. Obviously, in both

scenarios, the teacher cannot fully assess the level of understanding of the class if most

students are not demonstrating what they know. Teachers may think they have

determined what their students know through formative assessment, but the inadequate

feedback about most students’ understandings does not supply enough information to

truly determine where a class stands in relation to the learning goals (Wiliam, 2014).

Furthermore, correct answers from the few students who respond can be problematic.
Duckor (2014) found that if teachers receive the answer they are looking for, they usually

conclude that all students understand. Likewise, Kira et al. (2013) disclosed that teachers

believed all of their students would respond similarly to the few students who gave

responses during formative questioning. These conclusions influence teachers’

instructional decisions about how to proceed with the lesson. Therefore, the IRE method

only provides limited student feedback that teachers can use to adjust instruction to help

students meet learning goals. As Duckor and Holmberg (2017) stated, for teachers to

make sound instructional decisions during class, they need adequate feedback about all

their students’ understandings. Therefore, teachers need to elicit responses from the

whole group. Whole group response “means that all students in the class have frequent

opportunities to respond”; furthermore, whole group response strategies will “promote

whole-class participation” (Tincani & Twyman, 2016, p. 13). It is only when whole

group response strategies are used to increase class participation, such as during

formative questioning, that teachers can make informed conclusions about what their

students understand (Duckor, 2014).

Wiliam (2018) discussed another problem with the IRE model that affects gathering

feedback about student understanding: Students view answering questions in class as

optional. Many students choose not to participate; instead, they often sit and wait for

other students to answer. When students view participation during formative assessment

as optional, they often go unnoticed in class, meaning these “students’ thinking goes

undetected—for hours, days, or even weeks” (Duckor & Holmberg, 2017, p. 170).

Consequently, teachers may discover from a summative assessment, such as a unit test,
that students did not fully understand the content. By then, it is often too late to address

misunderstandings and to reteach concepts. Pearsall (2018) advised that if teachers want

to “build a sustainable and effective assessment practice . . . then moving away from an

IRE model of response is crucial” (p. 30). Instead, teachers need to implement formative

assessment in a manner that gives the whole class an opportunity to show their

understanding. Eliciting feedback from the whole class during formative assessment can

help teachers develop a sense of what their students understand. Whole group response

strategies allow teachers to check for understanding and collect feedback from all

students at the same time (Nagro et al., 2016). Studies conducted by Johnson et al.

(2013) showed that collecting feedback from all students was especially beneficial for

urban schools, like Hammond. Effective teachers in their study repeatedly used whole

group response during formative assessment to assess student understanding.

For teachers to collect more feedback during formative assessment, they need

more students to participate; therefore, teachers must change their formative assessment

implementation to include whole group strategies. Many studies have shown that when

teachers consistently used whole group response strategies in class to elicit student

feedback, student participation increased (Cakiroglu, 2014; Duckor & Holmberg, 2017;

Haydon et al., 2013; Heritage & Heritage, 2013; Messenger et al., 2017; Tincani &

Twyman, 2016). Cakiroglu (2014) found student engagement increased when teachers

used whole group response strategies in class; students were more inclined to answer

questions and to show their thinking. He found that the mean percentage of student

responses during traditional hand-raising was 27.5, and during the use of a whole group
response strategy, the mean percentage was 91.45. Likewise, Messenger et al. (2017)

discovered that the whole class response format resulted in greater student participation

than the IRE method. Another significant finding of their study was that implementing

whole group response was a “feasible strategy that could be implemented with high

fidelity” (p. 182). Haydon et al. (2013) also compared whole group response to

individual response and found that whole group response strategies, such as choral

responding and response cards, not only increased student participation, but the

implementation of these strategies also resulted in higher academic achievement. Many

other studies also found that increased participation and student achievement resulted from

teachers regularly implementing whole group response strategies that gave students

opportunities to answer during formative assessment (MacSuga-Gage & Simonsen,

2015). Furthermore, studies have shown that whole group response strategies are

successful at increasing the participation of students with learning disabilities, behavioral

disorders, intellectual disabilities (Haydon et al., 2013), anxieties, shyness, lack of

confidence, and off-task behaviors (Messenger et al., 2017). Often these students do not

volunteer to participate during formative questioning. The result is that teachers collect

feedback from and respond frequently to the participating students while inactive students are

regularly overlooked (Kira et al., 2013; Wiliam, 2018). Special education students

(Clarke et al., 2016), low-achieving students, and general education students all benefit

when their teachers implement whole group response strategies during formative

assessment (Cakiroglu, 2014). Using an instructional strategy that allows a wide range of
students to participate more fully in formative assessment is especially important in

inclusive classroom settings where many teachers find themselves teaching.

Opportunities to respond. When teachers use whole group response strategies

during formative assessment, they encourage all students to show their understanding.

Whole group strategies that enable all students to simultaneously participate more fully

during teacher-directed formative questioning are often called Opportunities to Respond

(OTR) (MacSuga-Gage & Simonsen, 2015). OTR works as an instructional strategy that

can help teachers quickly reveal what students understand during formative assessment

and whether they should make any immediate instructional adjustments to facilitate learning

(Menzies, Lane, & Oakes, 2017). Although there is never any guarantee that all students

will participate during formative assessment, OTR strategies have been shown to increase

the likelihood of student participation by offering every student in class an opportunity to

participate (MacSuga-Gage & Simonsen, 2015; Menzies et al., 2017). With OTR,

students have frequent opportunities during class to provide teachers with feedback about

their understanding (Duckor & Holmberg, 2017; Messenger et al., 2017). Therefore,

teachers can collect more feedback about student understanding more frequently

(Andersson & Palm, 2017). OTR strategies are a type of Active Response Strategy

(ARS) (Tincani & Twyman, 2016) and are considered a Total Participation Technique

(TPT) (Himmele & Himmele, 2017).

OTR strategies. There is a wide range of OTR strategies teachers can use

during whole group instruction to give all students an opportunity to respond during

formative assessment so that teachers can gather feedback to make informed instructional
adjustments. These instructional strategies are useful ways for teachers to engage students

in formative assessment, to quickly collect feedback to determine students’ levels of

understanding, to immediately adjust instruction (Wiliam, 2014), to inform future

instruction, and to monitor student progress over time (Nagro et al., 2016). Also

noteworthy is that OTR strategies allow teachers to provide instant feedback to students

about their responses. This teacher feedback is an important part of the formative

assessment process (Heitink et al., 2016). Immediate feedback during OTR

implementation is “critical because it improves accuracy of students’ responses,

encourages participation, and discourages off-task and disruptive behaviors” (Tincani &

Twyman, 2016, p. 13).

OTR strategies can be grouped into verbal, gestural, written, or technological

methods of responding (Duckor & Holmberg, 2017; Messenger et al., 2017; Nagro et al.,

2016). There is a wide variety of strategies and techniques under each category, many of

which can be tailored to teachers’ instructional styles or classroom settings.

The following represent a few commonly implemented OTR strategies:

Verbal OTR strategies. The main verbal whole group OTR strategy is choral

response. Choral response “involves asking all students the same questions, giving wait

time, and then giving them a signal that cues them to provide a response in unison”

(Whitney, Cooper, & Lingo, 2017, p. 3). An example is to ask all students, “What is a

negative ion called?”, wait for five seconds, and then cue students to answer

together aloud. Students could also respond in unison to a question with a choice of

answers such as “acute, right, or obtuse” when shown pictures of angles. Menzies et al.
(2017) advised that having a cue for students to simultaneously answer was essential.

They suggested teachers use a gesture such as raising an arm, say a verbal cue word,

display a visual such as the word “answer” on a screen, or use a combination of these. The Center

on Innovations in Learning (2016) suggested two instructional moves teachers can

execute after receiving student feedback from choral response: (a) If only a few incorrect

answers are heard, teachers can restate the answer with the question (for example, “Yes, a

negative ion is called an anion.”) and then present the same question again later for

reinforcement; or (b) If many students answer incorrectly, then the teacher should state

the question with a brief explanation of the correct answer, immediately ask the same

question again using choral response, and then present the question again shortly after.

Tincani and Twyman (2016) also recommended that after a choral response, teachers ask

individual students to repeat the answer. This move can confirm understanding and

reinforce new learning.

Gestural OTR strategies. Gestural strategies “allow students to use their hands to

provide a response that indicates either an answer to a question or to indicate a level of

understanding of the lesson content” (Whitney et al., 2017, p. 3). These OTR strategies

can give fast feedback to teachers during instruction to verify if students understand

concepts they are being taught (Nagro et al., 2016). They also prevent students from

becoming discouraged when they do not understand content during a lesson because

teachers are addressing their misunderstandings regularly (Nagro et al., 2016). One

gestural strategy is to ask students to use their fingers to show a

scaled response to a question that indicates their level of understanding. For example,


showing one finger means no understanding, three fingers show partial understanding,

and five fingers signify total understanding (Whitney et al., 2017). Teachers can also

have students use their fingers to give more detail about their understanding. For

example, one finger up means “I do not understand,” two fingers up show “I think I get

it,” three fingers up mean “I understand,” and four fingers up represent “I understand and

could explain it to someone” (SRI, 2017). Teachers can determine what they want each

gesture to indicate and post it in the classroom while students learn how to use this

strategy. Gestures can also be used for simple responses such as one finger means true

and two fingers mean false. Teachers can also post formative questions on the board with

possible answers numbered underneath; students indicate which answer they believe is

correct by a show of fingers (Whitney et al., 2017). Another common gestural response

strategy is “thumbs up, thumbs down.” Students show a “thumbs up” to indicate “Yes,”

“I agree,” or “I understand”; they show a “thumbs down” to signal “No,” “I disagree,” or

“I do not understand” (Fisher & Frey, 2014a). Students can also use a sideways thumb to

show they are not sure of the answer. To keep answers more private, teachers can

recommend that students close their eyes during the gesture or hold their gesture closely

in front of them.

Written OTR strategies. There are several types of written OTRs teachers can

implement during formative assessment. Pre-printed response cards are reusable signs

students display to show their answer to a teacher-directed formative question (Helf,

2015). They, like verbal and gestural OTRs, give all students an opportunity to respond

simultaneously during formative assessment so that teachers can collect feedback about
student understanding (Tincani & Twyman, 2016). Pre-printed response cards are often

flash-card-sized, reusable answer options for multiple-choice (A, B, C, D), true

and false, agree and disagree, or yes and no questions (Cakiroglu, 2014; Helf, 2015;

Nagro et al., 2016). The cards can include other types of responses such as vocabulary

words, foreign language words, pictures, numbers, or symbols. When using pre-printed

response cards, the teacher asks a formative question to the whole class, uses wait time to

allow students time to think and select the appropriate card, and then cues students to

display their cards (Cornelius et al., 2016). Teachers can then quickly scan the cards to

check for student understanding. At any time, teachers can make instructional decisions

about whether to move on with the lesson, to reteach the whole class, or to work with

small groups or individual students (Helf, 2015; Menzies et al., 2017). For example,

halfway through a lesson, a teacher displays on the screen a multiple-choice question to

assess student understanding of the content they are learning. She reads aloud the

question and the four answer options. Students are instructed to choose their answer by

selecting “A,” “B,” “C,” or “D” from the pre-printed response cards at their desks. After

10 seconds, she says, “Cards up,” and the students display their answers. The teacher

quickly scans the class and notices about one-third of the students holding up “D” instead

of the correct answer, “B.” She reveals the correct answer and decides to review the

misunderstood concept again, this time using an analogy. The teacher can then ask

students to display a gestural strategy to quickly show if they understand the concept

better. Teachers can also have students use write-on response cards, which are

whiteboards or laminated sheets of paper, to display their answers (Tincani & Twyman,
2016). Having students write on a whiteboard provides more flexibility in the answers.

Although this write-on tool is a good way to gain more insight into student understanding

during formative assessment, Duckor (2014) advised teachers to only have students write

numbers, letters, or responses of a few words on whiteboards so that teachers can quickly

assess answers and determine next steps.

Another type of response card is called a processing card. Processing cards are

green, red, and yellow cards students display to show their level of understanding

(Himmele & Himmele, 2017). During formative questioning, students can hold up a

green card to indicate that they understand, a yellow card to show that they somewhat

understand, and a red card to indicate they do not understand. Students can also show

their cards during a lesson to determine if they are “good and ready to move on” (green

card), “okay and almost ready to move” (yellow card), or “confused and not ready to

move on” (red card). Teachers can ask probing questions to students with red or yellow

cards to decide how to adjust instruction to close any gaps in learning (Duckor, 2014).

The cards can also be used during independent practice. Students can display the

appropriate color on their desks to show their understanding as they work. Teachers can

scan the class to quickly determine who needs more support. If students place a green

card up, it shows “I get it, I can do this by myself,” a yellow card indicates “I sort of get

it, but would like more help,” and a red card means “I am stuck, I need help.” The

teacher can work in small groups with students displaying red or yellow cards or pair

them with students with green cards.


Because the answers are pre-determined or short, response cards are primarily

used for convergent or low-level formative questions (Nagro et al., 2016). However,

teachers can also ask high-level clarifying follow-up questions after receiving students’

initial responses. Low-level questions are important to the learning process, but follow-

up questions can reveal student thinking at a deeper level (Jiang, 2014). These questions

may help teachers understand why students answered the way they did. As Duckor and

Holmberg (2017) and Wiliam (2018) advised, teachers must seek more than correct

responses; they need to learn about student understanding, see patterns in student

thinking, and uncover misconceptions. When teachers collect more feedback about

student understanding, they can make more informed decisions about “what to teach,

reteach, or even preteach” (Duckor, 2014, p. 31). Himmele and Himmele (2017)

recommended that teachers regularly ask students to explain their thinking during any

whole group OTR strategies by choosing students with correct or incorrect answers to

expand on or defend their responses. One way that Pearsall (2018) suggested teachers

learn more about student understanding is by simply asking them, “What is your

reasoning behind that answer?” or “Why did you choose that answer?”

If teachers want to delve deeper into student understanding, they can have

students write extended responses on paper or in an electronic document. These written

responses are often used when asking students open-ended or divergent formative

questions (Nagro et al., 2016). Teachers must keep in mind that formative questions

should be written in a way that gives them feedback about students’ conceptual

understanding (Fisher & Frey, 2014a). The questions should be crafted in advance to
help uncover student thinking and common misconceptions (Himmele & Himmele,

2017). There is a multitude of extended response OTRs that teachers can use during

formative assessment, including one-sentence summaries, quick writes, 3-2-1, sentence

stems, and learning logs. Descriptions of these strategies, along with others, can be found

in Appendix G (Whole Group OTR Strategies by Category).

Teachers often use written OTR strategies for students to show what they learned

from a lesson at the end of class. Teachers should use the feedback they collect from

students to inform their next lesson (Cornelius, 2014). Whatever OTR strategy teachers

use, they need to review the responses to understand what their students know or do not

know. One technique to review student understanding is to skim over the answers and

place them into two piles: students who understand the concept and students who do not

understand (Dixon & Worrell, 2016). Teachers can then decide whether they will need to

reteach a concept to the whole class, to place students into groups based on their levels of

understanding, or to assist individual students.

Technological OTR strategies. Technology can be another advantageous OTR

strategy that helps teachers collect formative feedback from all students. Technological

OTRs can be implemented with devices (e.g., clickers, cell phones, computers, and

tablets), software programs, websites, or apps. Many of these technologies are response

systems known as connected classroom technology (CCT). CCT is interactive,

informational communication technology that allows teachers to quickly gather data on

student understanding so they can give immediate feedback and make real-time

instructional adjustments (Shirley & Irving, 2015). For example, a teacher displays a
slide-show presentation with embedded formative assessment questions (using a software

program or web application) during the lesson. Students use their devices to

simultaneously answer the questions. The teacher receives immediate feedback from

student answers and, based on the student feedback, decides if the content needs to be

retaught. Some examples of CCT include Kahoot, QuizletLive, Poll Everywhere, Google

Forms, Socrative, Mentimeter, and clickers. Clickers are popular educational hand-held

devices also known as student response systems; they are a quick, efficient way to collect

honest feedback from students and to encourage participation (Fuller & Dawson, 2017).

Landrum (2013) found 83.1% of the students surveyed in his study commented that they

participated more when teachers used clickers to assess their understanding. Likewise,

Shirley and Irving (2015) found that CCT increased student engagement, which gave the

teacher a comprehensive understanding of student learning and allowed students an

opportunity to evaluate their own learning from the immediate feedback teachers

provided. Student responses from CCT, such as clickers, can be displayed anonymously

on a screen in the classroom, giving teachers immediate data (Fisher & Frey, 2014a).

They can then make quick and informed instructional decisions regarding next steps for

learning. For example, after students respond to a multiple-choice question with their

CCT devices, the teacher sees on the screen that there is a variety of answers. She can

then choose several ways to address the student misunderstanding. For instance, the

teacher can (a) acknowledge the confusion, give the correct answer, and explain why the

answer is correct; (b) show why one of the answers was incorrect and allow students to

choose again; or (c) give students time to talk with a partner and choose again. Software
programs associated with CCT devices and other online student response applications

also have “inbuilt reporting functionality” that can “provide teachers with quantitative

and qualitative information about learning, at the classroom level as well as the individual

level, which can be used to inform teaching” (Perrotta & Whitelock, 2017, p. 133).
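
To illustrate the kind of quick tally such reporting functionality supplies, the following short Python sketch mirrors the instructional options described above; the responses and the decision thresholds are hypothetical illustrations, not values drawn from the cited literature:

    # Illustrative sketch (hypothetical data): tallying anonymous responses to one
    # multiple-choice CCT question and suggesting a next instructional step.
    from collections import Counter

    responses = ["B", "D", "B", "B", "D", "B", "A", "B", "D", "B"]  # one per student
    correct = "B"

    counts = Counter(responses)
    pct_correct = 100 * counts[correct] / len(responses)
    print(f"Responses: {dict(counts)}; {pct_correct:.0f}% chose the correct answer")

    # Arbitrary illustrative thresholds for choosing a next step
    if pct_correct >= 80:
        print("Move on with the lesson; revisit the question later to reinforce.")
    elif pct_correct >= 50:
        print("Have students discuss with a partner, then re-poll the class.")
    else:
        print("Reteach the concept with a different approach before continuing.")

A real CCT platform’s built-in reports would replace this hand tally; the sketch only makes the underlying decision logic concrete.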

Although non-technological formative assessment tasks can provide the same

outcomes as technological formative assessment tasks, using technology is often a more

valuable and less time-consuming way for teachers to check whole group understanding

(Fisher & Frey, 2014a; Perrotta & Whitelock, 2017). Technology can provide teachers

with more accurate feedback about student understanding than traditional methods

because technology gives all students an opportunity to respond in an anonymous way

that makes it low-risk to participate (Chan et al., 2014).

Managing OTR materials. Himmele and Himmele (2017) suggested teachers

prepare a kit to help students quickly retrieve any OTR tools they need in class. These

kits can be kept in plastic containers, bags, pocket folders, manila envelopes, or zippered

pouches. The recommended items for kits include a laminated piece of light colored

construction paper for a simple whiteboard; a dry-erase pen; a felt square for an eraser; a

set of laminated, pre-printed response cards (e.g., true/false, ABCD, agree/disagree);

index cards; pre-printed or blank half-sheets for extended writing responses (e.g., quick-

writes, sentence stems, short answer responses); and green, yellow, and red processing

cards. Teachers can place kits in a central location for students to pick up when

prompted, leave kits at student desks, pass out kits when needed, or have students keep

their own kits in a folder.


Establishing OTR student expectations. When teachers introduce OTRs to

their students, they should set expectations. First, students should be taught how to

gather, use, and put away OTR materials to reduce downtime and increase efficiency

(Helf, 2015). After teachers review routines and procedures, the following elements,

adapted from Menzies et al. (2017), can help them to smoothly implement the new OTR

strategies. Teachers should inform students:

1. The purpose of OTRs is to show the teacher what you understand about the

lesson; teachers will use the information to learn in which areas you need help.

2. All students are expected to participate.

3. Students must remain in their seats and respond only using the given OTR

strategy.

4. Do not respond until the teacher gives the cue or signal.

5. The pace will be rapid, so you will need to pay attention.

6. Correct answers will be provided after all students respond.

7. The focus is on understanding why an answer is correct, not just having the

correct answer.

Menzies et al. (2017) suggested practicing an OTR strategy with a few fun and easy

questions so that students can become accustomed to the process. They also cautioned

teachers to not become frustrated when first implementing OTRs, as students may need

time to grow accustomed to using the new strategies.

Implementing an OTR strategy. Menzies et al. (2017) suggested seven steps to

follow when implementing an OTR strategy. These seven steps can be used with any
type of OTR strategy. Teachers at Hammond will learn about OTR strategies and

practice the following steps during professional development:

1. Identify the lesson content to be taught and the learning goals.

2. Prepare a list of questions or prompts related to the content and aligned with

the learning goals.

3. Determine how you will deliver your questions (e.g., PowerPoint, paper,

orally, board).

4. Determine how you want the students to respond to your formative questions

by choosing an OTR strategy (e.g., choral response, response cards, gestures,

clickers).

5. Let students know you are conducting a whole group response activity where

everyone will have an opportunity to respond. Review expectations and the

purpose of the formative assessment until students are comfortable with the

process.

6. Conduct the lesson, asking the planned formative questions when appropriate

and having students use the chosen OTR strategy.

7. Respond to student answers with positive or corrective feedback. Determine

if any further explanation or instructional adjustments need to be made to help

bridge the gap between what students currently understand and the intended

learning goals. If student answers are correct, move on with the lesson; if

there are misunderstandings, address them immediately or in the next lesson.


After teachers and students become familiar with the OTR process, teachers

should begin to increase the number of OTRs they use. Researchers have conducted

studies that determined what OTR rates yield the best results (Messenger et al., 2017;

Whitney et al., 2015; Wiliam, 2014). In other words, they established how often teachers

should use OTR strategies during class and how many questions they should give to

students during each OTR session. Wiliam (2014) suggested that teachers implement

whole group response strategies at least once every 20 to 30 minutes to “ensure that

their decisions are based on the learning needs of the whole class” (p. 19). Messenger et

al. (2017) recommended that teachers implement OTRs at the rate of three questions per

minute for non-written responses and one question per minute for written responses. This

means during 3 minutes of formative assessment, teachers should invite all students to

respond to nine questions. Whitney et al. (2015), who noticed that teachers from all

content areas and grade levels implemented OTRs at low rates, found it was crucial for

teachers to maintain high OTR rates to positively affect student learning. Although high

OTR rates are beneficial, it is still important to allow wait time for students to process

information before asking them to respond. Duckor (2014) recommended using a visual

timer or stopwatch during OTR strategies to ensure wait time is provided. Following the

steps for implementation and striving to increase OTR rates in the classroom will be

important for successful adoption of OTRs into practice.

Summary

The second literature review focused on the genre and content of the project

developed from study findings. I developed a deeper understanding of the components of


effective professional development, and these elements, along with adult learning theory

(Knowles, 1973), guided my project design. Participants in the study collected only

limited information about student understanding during formative assessment; however,

research has shown that teachers should implement whole group response strategies, such

as OTRs, to gain feedback from all students and advance learning (Duckor & Holmberg,

2017; Haydon et al., 2013; Messenger et al., 2017; Tincani & Twyman, 2016).

Professional development sessions can be successfully used to help teachers learn about

and implement new instructional strategies. Therefore, I focused mainly on OTR training

in this project study’s professional development. Teachers will be taught four main

categories of OTR strategies: verbal, gestural, written, and technological. Teachers will

learn a wide range of strategies they can implement within each category—all shown to

increase student responses. All the strategies encourage student participation by inviting

students to show what they understand at any point of a lesson. Thus, by implementing

OTR strategies, teachers can provide all students opportunities to respond during

formative assessment. When teachers elicit greater feedback about students’

understanding, they can use the information to make more informed decisions about their

instructional adjustments. PLCs offer teachers time for collaboration that can be used to

support and sustain the new learning. Through reflection and feedback, colleagues can

engage in constructive conversations that can strengthen their OTR practices during

formative assessment and increase student learning.


Project Description

The project resulting from the findings of this study consists of three day-long

professional development sessions and year-long support during PLCs. The

professional development sessions will provide teachers with instructional strategies that

may increase students’ opportunities to respond during formative assessment so teachers

will have the necessary feedback to make informed instructional adjustments.

Collaboration in PLCs, where teachers can reflect and provide feedback on formative

assessment and OTR implementation practices, may help support and sustain the new

learning. The project also addresses the barriers and supports that participants voiced in

the interviews, including student participation during formative assessment, collecting

student feedback quickly, time to collaborate about implementation, incorporating

technology, and wanting effective research-based strategies.

In this section, I will review the components needed for implementing the project.

Discussions include existing supports available at Hammond, resources needed, potential

barriers to the project, and possible solutions to the barriers. I also describe the

project implementation and timeline. All supporting documents are found in Appendix A.

Finally, I discuss the roles and responsibilities of those involved in the professional

development, implications of the project, and the plan for evaluating the project.

Existing Supports and Resources Needed

Teachers at Hammond typically have full-day professional development the week

before the start of each new school year. The 3-day instructional sessions planned as part

of this project may be accommodated during this time. PLCs, which have been fully
established at Hammond, meet by content department twice a month after school for an

hour and a half. PLCs usually involve collaborative time to review ongoing schoolwide

or departmental initiatives, discuss instructional practices, read about new trends in

education, examine student data, or develop lessons. This project will require 30 minutes

of PLC time each meeting for teachers to reflect on and provide feedback about OTR

implementation and instructional adjustments resulting from student formative

assessment feedback. Because PLCs have been a long-established structure at

Hammond, teachers are familiar with the format and actions needed to participate in a

productive learning community.

School leaders at Hammond also support the use of research-based and data-

driven strategies to increase student achievement. They specifically chose formative

assessment as one of the instructional strategies to include in their School Improvement

Plan. For the past several years, leaders have encouraged teachers to use formative

assessment to check for student understanding and to adjust instruction through

schoolwide initiatives such as weekly formative assessment cycles, governance board

presentations, and warm-up and exit slip use. Formative assessment implementation is

also a component of the school’s teacher evaluation process. Teachers and

administration, therefore, have a vested interest in the implementation of formative

assessment. Study data showed participants believed that regular formative assessment is

beneficial, and they also acknowledged a need to learn new strategies to improve their

formative assessment practices. All of these supports strengthen the likelihood of

this project’s successful implementation.


Besides time for training and collaboration, this project requires very few

additional resources. As Nagro et al. (2016) stated, whole group OTR strategies can be

easily implemented in schools with nominal resources. The minimal cost of this project

should be a welcome element in a school with a limited budget. Resources for the 3-

day professional development sessions include index cards, chart paper, markers, copies

of agendas and other handouts, a projector and screen, and meeting rooms. Resources

needed for teachers to create OTR tools include colored construction paper, lamination,

dry-erase markers, ring clips, and copy paper. These materials are found in the standard

school supplies budget. If costs allow, I recommend that school leaders purchase classroom

sets of mini whiteboards; if costs do not allow, then laminated card stock paper,

disposable plastic plates, or colored paper in a plastic sleeve are economical alternatives.

For teachers who want to use technological OTRs during formative assessment,

classroom sets of clickers are currently available at Hammond as well as class sets of

laptops and tablets.

The third day of the professional development includes two 90-minute technology

training session options for teachers. Therefore, two district instructional technology

coaches are needed to present during the time allotted. Before the professional

development sessions, I will need time to meet with the technology coaches to explain

what the training sessions entail. One coach will present about using clickers as an OTR

tool during formative assessment, and the other coach will present about using Google

Forms. Each technology coach will have 60 minutes to demonstrate how to set up

their designated tool and how the software’s data collection features allow teachers to collect
feedback about student understanding. Teachers will have an additional 30 minutes to

apply what they learned to create a formative assessment for their classroom while

coaches provide technical and instructional support.

Potential Barriers and Possible Solutions

Hammond school leaders arrange professional development for all teachers the

week before each school year begins. The schedule allows 4 days for professional

development and a day for classroom preparation. School leaders may need a couple of

days to discuss matters such as classroom procedures, school rules and protocols, new

programs, changes to existing programs, student data analysis, and school improvement

initiatives. Therefore, only 2 days may be available before school starts for delivering the

professional development sessions outlined in the project. In this case, I would suggest

presenting the first two sessions during those days. The third day’s content, which

involves increasing student opportunities to respond through using technology, could be

divided into smaller segments and discussed during PLCs. Teachers could implement the

technological OTR strategies in their classrooms, reflect on the implementation, and give

feedback about the successes and challenges they encountered. Another possible solution

may be to present the third session during a future professional development day that the

district allocates for its schools (usually one day per marking period). Teachers can

concentrate on implementing the strategies from the first two sessions until they learn the

technological OTR strategies from the third session.


Proposal for Implementation and Timetable

The 3-day professional development sessions will occur the week before school

starts, which Hammond’s district allocates for teacher professional development. I will

present each session using the PowerPoints and materials found in Appendix A. To

begin Day 1’s session, I will have an opening activity to engage teachers with one

another by asking them to reveal interesting facts about themselves. After introducing

myself, I will share the purpose of the professional development project that I developed

as a result of my study findings. I will establish norms to set expectations for our work

and then communicate the session’s learning targets. Teachers will complete the pre-

assessment column of the Teacher Formative Assessment Practices Survey (self-created)

to self-evaluate in three categories that are addressed in the project: clear learning targets,

formative assessment practices, and student feedback and adjusting instruction. The

survey, which contains questions about learning goals from all three professional

development sessions, will serve as baseline data for one component of the project

evaluation. I will compare the answers on the pre-assessment survey I give teachers to

answers on the post-assessment survey teachers will take at the end of the school year.
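
To illustrate one simple way this pre/post comparison could be summarized, the following short Python sketch computes the mean change per survey category; the ratings are hypothetical, and the 1-to-5 scale is assumed here only for illustration:

    # Illustrative sketch (hypothetical ratings on an assumed 1-5 scale): comparing
    # mean pre- and post-assessment responses for each survey category.
    pre = {
        "Clear learning targets": [2, 3, 2],
        "Formative assessment practices": [3, 3, 2],
        "Student feedback and adjusting instruction": [2, 2, 3],
    }
    post = {
        "Clear learning targets": [4, 4, 5],
        "Formative assessment practices": [4, 5, 4],
        "Student feedback and adjusting instruction": [4, 3, 4],
    }

    for category in pre:
        pre_mean = sum(pre[category]) / len(pre[category])
        post_mean = sum(post[category]) / len(post[category])
        print(f"{category}: {pre_mean:.2f} -> {post_mean:.2f} "
              f"(change {post_mean - pre_mean:+.2f})")

With the actual survey data, the same comparison could be run item by item to gauge growth within each category.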

Before presenting about formative assessment, I will have teachers work in

groups to complete the Developing a Definition activity to reflect on the components of

formative assessment and to develop a common understanding of what this practice looks

like in the classroom. During the activity, teachers will individually write what they

believe are the main components of formative assessment. They will then share their

answers with the group and cluster similar ideas together. The teachers will come to a
consensus on the key components, construct a group definition, and write their definition

on poster paper. All groups will share their definitions and display their posters.

Together, the teachers and I will craft a final school definition so that all staff will have a

common understanding of formative assessment. Next, I will give a presentation to

discuss the benefits of using formative assessment regularly with students, to share

research linking consistent formative assessment and student achievement, and to reveal

the need for consistent use of formative assessment in the classroom. I want teachers to

understand the potential this research-based process has to positively affect student

achievement in their school. After the presentation, teachers will have time to discuss

their experiences with formative assessment and chart their challenges, successes, and

implementation questions. Groups will share their thoughts to develop a rich

conversation about teachers’ formative assessment experiences. After a short break,

teachers will discuss Tomlinson’s (2014) article “The Bridge Between Today’s Lesson

and Tomorrow’s.” I will email teachers a link to this article to read prior to the session.

Using the Four A’s Protocol (SRI, 2017), each group will discuss the article by sharing

what they think the author assumed, what they agreed with in the text, what they want to

argue with in the text, and what parts of the text they want to act upon. I will listen to

conversations and ask questions to advance their thinking. Groups will finish the

discussion by writing three statements that they found notable onto poster paper. They

will share their statements with the whole group and then later hang the posters in the

teachers’ lounge as a reminder of our work. To check teachers’ understanding of

formative assessment, I will ask them to choose three of the six pictures I display on the
screen and to write how each photo is like formative assessment. I will continue to model

formative assessment strategies, such as the picture analogy task, throughout the sessions

to provide teachers with ideas they can use in their classrooms.

I will next introduce clear learning targets. Before I begin my PowerPoint

presentation, I will distribute the Learning Target Anticipation Guide to teachers to

activate their thinking on the topic. They will read the 10 statements on the handout,

mark whether they agree or disagree in the “before” column, and then place their

handouts in an envelope in the middle of the table. I will then present about learning

targets: what they are, why they are needed, and their connection to formative

assessment. Teachers will have an opportunity to reflect on their current learning target

use by writing examples of their learning targets and answering a series of questions such

as (a) Are your learning targets developed from content standards? (b) Are they focused

or broad? (c) Can you evaluate whether or not a student reaches the target? (d) Are they

clear to students or vague and confusing? (e) Do you regularly check that all your

students understand the learning targets?

Next, I will explain the basic structure of a clear learning target by using

Tomlinson’s (1999) KUD learning goal model that asks teachers what they expect

students to know, understand, and do. The learning targets will be based on the “know”

and “do” of the model, while the “understand” is the overall key idea or generalization of

the unit. Action verbs are needed to determine what students should know and do. To

engage teachers’ thinking, I will have them participate in an ABC Brainstorm activity.

Each teacher will receive a handout that has the letters of the alphabet listed with a space
after each letter. For each letter, they must think of action verbs associated with what

they want students to know and be able to do. For example, “A” could be analyze

and “B” could be build. At the end of 5 minutes, teachers will circle five main verbs they

regularly use in their learning targets. I will distribute The Learning Target Verbs Based

on Level of Complexity handout, constructed from the revised Bloom’s taxonomy levels

(Anderson & Krathwohl, 2001), as a resource for teachers. They should aim to create a

few learning targets at the knowledge level (the “know” in the KUD) and progressively

develop more complex learning targets for students to achieve. In addition to containing

an action or measurable verb, clear learning targets should be specific, concrete, and

written in student-friendly language. To test teachers’ abilities to recognize clear

learning targets, I will display 10 learning targets and ask them to determine which ones

are well-written. After working individually, teachers will compare their answers with a

partner and debate any differences. I will then review the answers with the group.

Besides writing clear learning targets, teachers should share each day’s learning

targets so that students understand what teachers expect them to know or do as a result of

the lesson. Therefore, I will discuss research about the importance of communicating

learning targets to students. Teachers will watch a short video clip, with source

permission, of a teacher communicating the learning target with students in his class and

reinforcing the learning target throughout the lesson. Teachers will then share ideas

about how they currently communicate learning targets with their students, if they do so,

and I will provide additional strategies.


Teachers will start the second half of the session by completing the right column

on their Learning Target Anticipation Guide, labeled “after,” to show their new

understanding of clear learning targets. I will use a gestural OTR strategy to formatively

assess teachers on their learning. Next, teachers will learn a four-step process for writing

clear learning targets from standards: (a) determine the standards you will address in the

lesson, (b) determine what you want students to understand, (c) determine what you want

students to do, and (d) determine what you want students to know. For each step, I

created questions to direct teacher learning. I will give teachers an opportunity to

practice writing clear learning targets from standards by presenting a set of standards and

having them work as a group to complete the Learning Target Planning Sheet that I

developed to help guide teachers through the process. I will circulate the room and assist

the groups as needed.

Lastly, I will give teachers time to practice writing clear learning targets for their

classes. They will meet with their PLC groups and develop the first marking period

learning targets for their classes using the Learning Target Planning Sheet. Teachers

should collaborate with colleagues who teach the same classes and ask for feedback from

their PLC group. I will visit groups to examine their work and give feedback. When

PLC groups return, they will briefly share what they accomplished with the whole group.

Day 1’s session will close with a discussion about insights, questions, or lessons learned.

Teachers will complete an exit slip about (a) the importance of clear learning targets to

formative assessment implementation, and (b) a comparison of their previous and current

learning target writing.


I will begin Day 2’s session by modeling a formative assessment strategy to

activate teachers’ thinking about Day 1. Teachers will complete the warm-up task by

creating a graphic organizer. They will write the words “formative assessment” in the center

of their paper and draw circles connected to the center that contain facts about what they

learned in yesterday’s session. After 5 minutes, I will use a random name generator app

to display a teacher’s name on the screen. Each teacher chosen will state a fact he wrote,

and if any others have the same fact, they will draw an “X” on that circle on their paper. This

technique will require all participants to carefully listen to one another. The activity will

continue for 5 minutes. Afterward, I will review the main feedback from Day 1’s exit slips

about learning targets and share how the feedback gave me insights into their

understanding. After reminding the group of our norms, I will then communicate the

learning targets for the day.

During Day 2’s session, teachers will be introduced to Opportunities to Respond

(OTR) strategies. To begin, I will display five questions for teachers to read and reflect

upon. Questions include the following: “Are there times when many of your students do not participate when you ask questions to check for understanding?” “Do you ever have students whose understanding you cannot gauge, often for long periods of time?” “Have you heard right answers from a few students and felt like everyone was ‘getting it,’ only to find out from a quiz or test that they did not understand?” “Do you often have the same students answer all the questions and wish you could ‘hear’ from other students?” “Do you wish that you could get more students to participate in formative questioning during instruction?” These

questions were designed to stimulate teachers’ thinking about their practice and to create
interest about today’s session. Teachers will show, by number of fingers, how many

“yes” answers they had to the questions. I will discuss how research has shown that

teachers who use formative assessment often only assess a limited number of students.

Teachers will be asked to turn to a partner and have a conversation about what percentage of the time they collect feedback about all students’ understanding. They will also discuss reasons they do not collect feedback from all students more often. The reflection activity,

research, and partner discussion will create buy-in for the day’s topic.

During my PowerPoint presentation, I will define OTRs, explain their benefits,

and give an overview of research linking OTR use and increased participation during

formative assessment. Teachers will learn about the main types of OTRs, starting with

verbal and gestural. I will discuss how to use verbal and gestural OTRs during formative

assessment and model strategies with the group. We will agree upon a schoolwide Likert

scale for the fist-to-five gestural strategy (such as one finger means “I do not

understand,” two fingers mean “I understand a little,” etc.) to establish consistency

between classrooms. Teachers can individually create signs with the guidelines to hang in their classrooms, or a staff member may volunteer to create the signs.

Next, I will introduce written OTR strategies. I will present information about

response cards: examples of what they look like, what they are used for, and how to

implement them in the classroom. I will follow the same format for presenting about

whiteboards. I allocated time for teachers to create a whiteboard and a set of response

cards. They will use these OTR tools for responding to formative assessment questions

(as I model implementation ideas) and in the role-playing activity during the second half
of the session. As I teach a mini science lesson, teachers will have an opportunity to see

how the OTR strategies are used in practice.

My presentation will continue with extended response, the third type of written

OTR strategy. Extended response OTRs require all students to individually write an

answer to an open-ended question to show their understanding; extended responses are

often given as exit slips at the end of the lesson. I will discuss how the wide variety of

strategies that can be used as extended responses may help teachers gain deeper insight

into student understanding. Teachers will also be reminded that they should collect or

review responses to formative assessment tasks so they can use the feedback to adjust

instruction. Because extended responses are often given as exit slips, and study data showed teachers did not collect exit slips, I will ask teachers to reflect on two

questions: “What do you usually do with the feedback on the exit slips after you have

students complete them?” and “Is there anything else you could do that would help you

use the feedback to make better instructional adjustments?” I will offer several strategies,

such as grouping students based on their level of understanding, reteaching to address

misunderstandings, starting the next day with a warm-up addressing the concept, and

differentiating lessons. At this time, I will distribute the Written OTRs Extended

Response handout and give teachers time to review the list of strategies and discuss with

their group which OTRs they find useful. Teachers will be asked to develop three

extended response OTR tasks for specific lessons during the first marking period using

their Learning Target Planning Sheet from Day 1 or to create a set of three generic

extended response handouts they can use with any lesson. I will also ask teachers to
record on index cards any additional extended response strategies they have successfully

used and share them with the whole group after we reconvene.

For the final activity before the lunch break, the group will watch four video clips

of teachers using OTR strategies in the classroom. They will write their observations and

questions on the Video Observations of OTR Implementation handout. Teachers will be

instructed to watch each of the video clips on the screen and complete two observation

questions. After the four videos, I will use an online group generator app to assign

teachers into groups of three. Teachers will discuss the videos with their group while I

walk around and answer any questions.

After the lunch break, I will present about how to begin implementing formative

assessment OTRs with students. I will explain the seven-step process of incorporating an

OTR strategy (McGlynn & Kelly, 2017) into a lesson and how to adjust instruction based

on the formative assessment results. Teachers will also learn how to establish student

expectations for using OTRs. I will also discuss OTR implementation rates and what

research has shown to be most beneficial for student learning.

To give teachers time to transfer the OTR strategies they learned into practice, I

will have them teach a sample lesson using their new skills. They will meet with their

PLC groups where they will collectively create an eight-minute lesson on a topic they

teach during the first marking period. The group must use the standards to develop clear

learning targets, create an engaging mini-lesson, plan formative questions to check for

student understanding, and incorporate at least three different OTR strategies that they

learned. While working, I will visit PLC groups to provide constructive feedback to help
strengthen their formative assessment and OTR practices. After 55 minutes, the whole

group will reconvene, and each PLC will present their mini-lesson to colleagues who will

play the role of students. I expect that, after being in both the teacher and student roles,

teachers will develop a greater understanding of how they can implement OTRs into

practice. After the group presentations, I will discuss creating OTR kits and managing

OTR materials for easy distribution in the classroom. The session will close with a brief

conversation about the PLC OTR lesson planning sessions and an extended response exit

slip where teachers write three sentences using the phrase, “I used to think . . . but now I

know.”

I will begin Day 3 by sharing statements from Day 2’s exit slips as a review of

what participants learned in the previous session. Teachers will then complete a warm-up

activity using an online tool called Survey Monkey so I can discover the digital formative

assessment tools with which they are familiar. Feedback from the survey will be

projected on a screen and displayed anonymously. The instant results will help me

decide which technological OTR tools I will demonstrate later in the session. After

communicating the session’s learning targets, I will present a brief overview of

technological OTRs, their benefits, and how they can be used to formatively assess

students. Teachers will be presented with sample results of student feedback and asked,

“What could the data be telling the teacher?” and “What are some instructional

adjustments the teacher could make to help students understand the correct answer?”

After a discussion about how to adjust instruction to address student understanding

during technological OTRs, I will ask teachers to share classroom management ideas for
using technology in their classroom. I will show two videos of teachers using clickers

and Google Forms as formative assessment technological OTRs in their classroom.

Teachers will turn to a partner and discuss how the OTR was used to give all students an

opportunity to respond during formative assessment and to share ideas they have for OTR

implementation in their classrooms. Next, I will offer teachers two choices for a breakout

session where they can learn a technological OTR to quickly assess student

understanding: Option A—clickers and Option B—Google Forms. Teachers will need to

bring their laptops and Learning Target Planning Sheets from Day 1 to the session they

choose. By providing a choice, I will allow teachers to determine which technological

OTR would be most beneficial for them to learn. Both 90-minute sessions will be

presented by district technology coaches. The sessions will include a step-by-step set-up

procedure, a demonstration of how to use the technology in class, an examination of the

feedback data, and an opportunity for teachers to create a formative assessment that they

can implement during their first unit.

When groups reconvene from the break-out sessions, I will share several other

technological OTR tools that can be used during formative assessment such as Kahoot,

Mentimeter, Quizlet Live, Padlet, Socrative, Quizizz, and Plickers. Depending on the

results of the technology warm-up survey at the beginning of the session, I may omit

tools with which most teachers are familiar. Teachers will have an opportunity to use the

digital OTR tools while I demonstrate how each can be embedded into formative

assessment. After teachers have seen how each of the technology tools can be used

in the classroom, I will give them time to work independently exploring these and other
technological OTRs. Each person will be given the Technological OTRs to Check for Student Understanding handout, a list of tools that give students opportunities to respond during

formative assessment. Teachers will have time to visit the websites, read about the

features, practice using the applications, and plan ideas of how to incorporate the tools in

their classes during the first marking period.

After the lunch break, I will display several questions asking teachers to reflect on their current formative assessment warm-up implementation. Questions

include: Do you give warm-ups regularly and purposefully to check for student

understanding? Do you walk around the room and check student answers while they

work on the warm-up task? Do you use warm-ups to gather feedback from all students or

only a few? Do you use the information you receive from warm-ups to inform your

instruction? These questions will prepare teachers for the Think-Pair-Share (Lyman,

1981) Implementation activity. Individually, teachers will think about possible ways they

could implement verbal, gestural, written, or technological OTR strategies during warm-

ups in their classroom for the coming year. My goal is for teachers to consider how they

could incorporate OTRs into their current warm-up practices so they will intentionally

collect more feedback from students to make better instructional adjustments. Teachers

will also reflect on past experiences using any of the OTR strategies discussed in the

sessions. They will find a partner by matching the symbols written on the back of their

handouts. Once together, partners will discuss their answers and then (a) write three

statements they would like to share from their discussion that their colleagues may find

helpful, and (b) name a possible challenge of implementing a specific OTR strategy and
suggest some possible solutions. Partners will have an opportunity to share their

statements and solutions with the whole group. My goal is for teachers to learn from

each other’s experiences and to problem-solve implementation challenges they may face.

The remainder of Day 3 will focus on formative questioning, an instructional

strategy that will help teachers to further uncover student understanding. I will explain

how teachers should use formative questioning during OTR strategies to elicit additional

feedback about student understanding. Topics include planning formative questions to

ask students while implementing OTRs, using questioning techniques after hearing OTR

responses, and balancing low- and high-level questions. By asking more intentional

questions during OTR strategies, teachers can reveal whether students have a surface-

level or a deep understanding of the content. Also, using probing questions after an OTR

strategy can further uncover student thinking and misconceptions. I will model how to

use probing questions to gain more feedback about student understanding by using

response cards.

Teachers will then read Chapter 1 of “Fast and Effective Assessment” by Pearsall

(2018), which explains how to become more effective at questioning. Each teacher will

be assigned a number on their handout, and those with like numbers will form a group.

Together, groups will use the Final Word protocol (Expeditionary Learning, 2013) to

discuss what they read. After the reading activity, I will give teachers a formative

assessment about information in the article by using clickers to demonstrate how quickly

this tool can be used to check for understanding. Next, teachers will pair with a colleague

and discuss two questions: How well do you feel you incorporate effective questioning
during formative assessment? What questioning strategies do you plan to integrate into

your formative assessment implementation this school year?

During the last segment of Day 3’s session, I will have teachers participate in the

Pair-Share-Move activity, in which they reflect on the five most valuable things they learned from

the three professional development sessions. They will write each answer on separate

index cards. To begin the activity, teachers will move around the room as music plays,

shaking hands or giving “high-fives.” When the music stops, they will pair with the

closest person. Each partner will choose two of their index cards and take turns

discussing what they wrote; they will give the two cards they read to their partner. When

the music starts, everyone will move around the room again. The process will repeat for

several rounds.

After having time to reflect on what they learned, teachers will receive the

Teacher Formative Assessment/OTR Commitment Form. They will write a personal plan

for using learning targets, implementing formative assessment OTRs, and increasing

questioning during and after OTRs so they can gather more feedback about student

understanding. I will collect the plans and make copies for school leaders, department

PLC facilitators, teachers, and myself. Teachers will reflect on these plans periodically

throughout the year to determine their progress. Lastly, I will explain the next step for professional development: supporting the new learning in PLC groups. For 30

minutes twice a month in their PLCs throughout the school year, teachers will discuss

formative assessment and OTR strategies, set goals, reflect on implementation, exchange

constructive feedback, and observe their colleagues. Day 3’s session will conclude with
an online professional development survey using Google Forms. The feedback from the

survey will help me to determine if participants perceived the professional development

sessions as beneficial to their instructional practice so that I can strengthen any future

sessions.

In addition to the 3-day professional development sessions, my project study

includes sustained support by using Hammond’s existing PLC structure. The suggested

PLC agenda and all PLC resources are found in Appendix A. The agenda shows a year-

long schedule for meetings and was developed to provide an ongoing dialogue about the

formative assessment practices teachers learned during the 3-day sessions. PLCs

currently meet for 90 minutes twice a month, and I am proposing 30 minutes each

meeting be dedicated to supporting teachers’ formative assessment practices as outlined

in this project study. From September to May, there are 16 possible meeting times, which results in a total of eight hours (16 meetings × 30 minutes each) of collaboration available for the project. At the

first meeting in September, the PLC facilitator will discuss department goals for

formative assessment OTR use and have materials available for teachers to create

classroom sets of response cards and whiteboards (if needed). At the second September

meeting, teachers will fill out the PLC Formative Assessment Reflection. During this

self-assessment, teachers will be asked: What are a couple of your class learning targets from

the past week? What formative assessment strategies did you use to check for

understanding of those learning targets? What OTR technique(s) was used to elicit

feedback about student understanding during the formative assessment strategy? What

worked well? Were there any problems or concerns? What did student feedback indicate
about student understanding? Because adjusting instruction is a critical component of the

formative assessment process, three additional questions will focus on how teachers

adjust instruction due to the student feedback they collected during a formative

assessment task: What instructional adjustments were made or will be made as a result of

student feedback from the OTRs? What were the outcomes of any instructional

adjustments you made? How do you know (or would you know) if student understanding

improved after you made an instructional adjustment? After a teacher shares his

reflection with the PLC group, the other members will have an opportunity to provide

constructive feedback or give ideas that may help strengthen their colleagues’ formative

assessment practices. The PLC facilitator will keep all reflection sheets and submit them

to the building principal at the end of each semester. School leaders can use the

reflection sheets to provide evidence of PLC support of this project and to evaluate

growth in teacher formative assessment OTR practices throughout the year, which will

aid in the project evaluation.

Before the October meeting, teachers will be asked to complete the “current level

of performance” section of the PLC Action Plan to Increase OTRs During Formative

Assessment. For this task, teachers will assess the frequency with which they implement an

OTR strategy during class and the rate of their formative assessment questioning during

OTR implementation. At the first October PLC meeting, all teachers will discuss their

current level of performance from their Action Plan with the group. This activity

develops accountability and support among colleagues for transferring the information

they learned in the professional development sessions into practice. Next, teachers will
write personal goals for increasing their formative assessment OTR use on the “Plan to

increase OTRs” section of their action plan. Over the next few weeks, everyone will be

expected to execute their action plans. At the second October PLC meeting, teachers will

read Stefl-Mabry’s (2018) article, “Documenting Evidence of Practice: The Power of

Formative Assessment” and discuss the content using the Save the Last Word Protocol.

During this meeting, everyone should also comment on how their action plans are

progressing.

At the first meeting in November, teachers will once again complete the PLC

Formative Assessment Reflection and discuss as a group using the protocol of their

choice. For the next part of their Action Plan to Increase OTRs During Formative

Assessment, teachers will need to connect with a colleague who can observe their

classroom and complete the “monitor progress” section. There are rows for 4 days of

observations provided on the action plan sheet, and PLC groups should determine the

minimum number of observations they wish to achieve. At the second PLC meeting in

November, teachers will discuss the results of their action plan observations while the

other members of the group give constructive feedback, share ideas, and provide

encouragement.

In December, the PLC groups will revisit the Teacher Formative Assessment

OTR Commitments completed during the last professional development session and

discuss how well they are progressing on department and individual formative

assessment OTR goals. Teachers will then take the Teacher Formative Assessment

Practices Survey mid-year assessment (the middle column), which the facilitator will
submit, along with the first semester PLC Formative Assessment OTR Reflections, to the

building principal. PLCs in the second semester, January through May, will follow the

same format as the first semester. I have also recommended five books, which are listed on the PLC agenda, to support formative assessment and OTR use.

Roles and Responsibilities

My responsibilities for this project include designing the 3-day professional

development PowerPoint presentations; creating activities, resources, and handouts;

contacting the building principal to arrange the days to present the sessions; and securing

two district technology coaches for the 90-minute break-out sessions on Day 3. I will

facilitate the three sessions and be available for consultation during the school year as

needed. The two technology coaches will deliver a presentation about using clickers and

Google Forms as strategies to give all students opportunities to respond during formative

assessment. They will demonstrate how to set-up the software and use the application as

well as help teachers create a formative assessment to use in their class. Department PLC

facilitators will help provide ongoing support during twice-monthly meetings. Their

responsibilities include following the suggested PLC agenda; using the reflection,

feedback, and action plan tools in meetings; promoting constructive conversations about

formative assessment OTR implementation; and collecting and submitting PLC

reflections, action plans, and teacher surveys at the end of each semester. The PLC

facilitators will be expected to observe OTR instruction, provide feedback, and model

strategies; they will also contact me as needed to answer questions. School leaders have

the role of establishing a schoolwide culture that supports the implementation of


formative assessment OTRs in the classroom. Some of their responsibilities include

designating time for the project’s initial 3-day professional development sessions,

allocating at least 30 minutes during PLCs for OTR reflection and feedback, providing the

necessary resources for teachers to create and use OTR tools, maintaining building-wide

initiatives that promote formative assessment use, holding PLC facilitators accountable

for following the agenda and submitting materials, and reviewing data at the middle and

end of the year to determine how to continue supporting consistent formative assessment

use. The commitment of all stakeholders to the roles and responsibilities outlined above may

support the successful implementation of this project.

Project Evaluation Plan

All professional learning should be evaluated on several levels to ensure effective

implementation of strategies and to promote an environment that can positively affect

student achievement (Guskey, 2016). As Guskey, Roy, and von Frank (2014)

determined, one source of evaluative evidence will not provide the data necessary to

determine if professional development has been successful. Similarly, the professional

learning standards of Learning Forward (2013) indicated, “The use of multiple sources of

data offers a balanced and more comprehensive analysis of student, educator, and system

performance than any single type or source of data can” (p. 20). Learning Forward

(2013) suggested that the multiple sources consist of both quantitative and qualitative

data. Professional development evaluation is needed to establish accountability, to check

for progress of implementation, to determine the resulting influence on teaching and

learning, and to make future decisions (Learning Forward, 2017).


I have developed a goal-based evaluation plan to determine the project’s success.

The goals of the project include teachers (a) writing clear learning targets to focus their

formative assessment, (b) using OTR strategies during formative assessment to allow a

greater number of students opportunities to respond, and (c) using student feedback from

formative assessment tasks to adjust instruction. A goal-based plan will allow me to

determine if these three project goals were met. The evaluation plan comprises both quantitative and qualitative data. The quantitative component of the evaluation consists

of a teacher survey (see Day 1 section of Appendix A), a student survey, and the PLC

Action Plan to Increase OTRs During Formative Assessment (see PLC section of

Appendix A). The qualitative data used to evaluate the project will be from PLC

Formative Assessment OTR Reflections collected from teachers at the end of each

semester.

A teacher self-assessment survey will be one source of evaluation data for all

three goals. I designed the survey to address the content of the professional development

project. Teachers will complete the pre-assessment section of the Teacher Formative

Assessment Practices Survey (see Day 1 resources in Appendix A) during the first

professional development session. The survey will be given again during the May PLC

meeting as a post-assessment, and the results compared to the pre-assessment. Answers

to the section “Learning Targets,” questions 1 through 6, will be used to evaluate Goal 1.

Answers to the section “Formative Assessment Practices,” questions 7 through 18, will

be used to evaluate Goal 2. Lastly, answers to the section “Student Feedback and

Adjusting Instruction,” questions 19 through 25, will be used to evaluate Goal 3. The
target is that 40% of the teacher self-ratings in the corresponding sections will increase at

least one level from the pre-assessment survey to the post-assessment survey.
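
To illustrate how this target could be checked, the following is a minimal Python sketch; the teacher labels, ratings, and 1-to-4 scale are hypothetical placeholders for the actual survey export:

    # Hypothetical pre- and post-assessment self-ratings (1-4 scale),
    # keyed by teacher; each list holds one rating per survey question.
    pre = {"T1": [2, 2, 3], "T2": [1, 2, 2], "T3": [3, 3, 2]}
    post = {"T1": [3, 2, 4], "T2": [2, 2, 3], "T3": [3, 4, 2]}

    # Pair each pre-assessment rating with its matching post-assessment rating.
    paired = [(p, q) for t in pre for p, q in zip(pre[t], post[t])]
    increased = sum(1 for p, q in paired if q - p >= 1)
    share = increased / len(paired)

    print(f"{share:.0%} of self-ratings rose at least one level")
    print("Target met" if share >= 0.40 else "Target not met")

With the sample values above, 5 of the 9 ratings rise at least one level (56%), so the 40% target would be met.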

I will use results from Hammond’s bi-annual TRIPOD student survey as another

evaluation for Goal 2. TRIPOD is a school improvement company that collects and

reports on student perspectives about teaching and learning. School leaders give all

students the TRIPOD survey at the beginning and end of each school year. Several

questions on the survey directly relate to teachers’ use of formative assessment, such as

whether students feel their teachers check to see if they understand concepts during

a lesson. If more teachers are regularly using OTR strategies that give students

opportunities to respond during formative assessment, then the number of students

answering positively about their teachers’ formative assessment practices should

increase. The survey answers can be compared over time. For example, in the school

year the project is implemented, fall data from TRIPOD could be compared to spring data

to determine if student perceptions of their teachers’ formative assessment use grew more favorable. Each question on the TRIPOD survey is reported as a percentage of the total number of students taking the survey, so the quantitative project goal is a 25% increase in the

percentage of students answering positively on questions about their teachers’ use of

formative assessment to check for understanding from the fall survey to the spring

survey. Comparisons could also be made from spring of the implementation year to

spring of the year prior.
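
As a sketch of how this fall-to-spring comparison might be computed, the short Python example below assumes the goal refers to a relative increase and that each question’s positive-response rate is available as a percentage; the question labels and values are hypothetical:

    # Hypothetical percentages of students answering positively on
    # formative assessment questions, fall versus spring administration.
    fall = {"My teacher checks my understanding": 48.0,
            "My teacher asks what we think": 55.0}
    spring = {"My teacher checks my understanding": 62.0,
              "My teacher asks what we think": 66.0}

    for question, fall_pct in fall.items():
        spring_pct = spring[question]
        relative_change = (spring_pct - fall_pct) / fall_pct * 100
        status = "goal met" if relative_change >= 25 else "goal not met"
        print(f"{question}: {fall_pct:.0f}% -> {spring_pct:.0f}% "
              f"({relative_change:+.0f}%, {status})")

If the goal were instead interpreted as a 25-percentage-point gain, the comparison would use spring_pct - fall_pct rather than the relative change.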

The PLC Formative Assessment OTR Reflections and PLC Action Plan to

Increase OTRs During Formative Assessment are other sources of evaluation data for
Goals 1, 2, and 3. The reflection sheets address learning targets, OTR implementation,

and using student feedback to adjust instruction. The PLC Action Plans can also be used

to determine if teachers are using OTRs with greater frequency, which OTRs are being

implemented, and increases in the rates of OTR use—all of which align with Goal 2. The

project goal is to have 50% of the teachers show an increase in OTR use and

implementation rate by the end of the school year. These sources may be useful not only to evaluate the transfer of professional development learning into practice, but also to provide data about how PLCs support and sustain the project goals.

I also recommend two other sources of evaluation. School leaders could use

components of the formal teacher evaluation rubric as outcomes-based evaluation data

for all three project goals. Hammond’s district uses the 2007 Danielson Framework for

Teaching for formal teacher evaluations. The framework includes two components

regarding formative assessment: Component 1f—Designing Student Assessments and

Component 3d—Using Assessment in Instruction. Component 1f evaluates whether a

teacher aligns formative assessment with clear instructional outcomes or learning targets,

has well-developed strategies for using formative assessment with students, and uses

formative assessment results in planning future instruction. Component 3d measures

whether formative assessment to check for student understanding is absent, occasionally

used, regularly used, or fully integrated into instruction (Danielson, 2007). The evaluator

also considers if the teacher (a) uses effective questioning to elicit evidence of student

understanding and (b) adjusts instruction during class to address misunderstandings based

on student feedback. Teachers can be rated as unsatisfactory, basic, proficient, or


distinguished in each component area. Because criteria in two components of the teacher

evaluation tool are addressed in the project, an increase in teacher proficiency levels in

those areas could be used for project evaluation. A possible goal is a 20% increase in the

number of teachers evaluated as proficient or distinguished in Components 1f and 3d

when comparing results from the spring of the year of project implementation to spring of

the year prior.
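
As an illustration of this year-over-year comparison, the brief Python sketch below assumes one rating per teacher for a given component in each year; the rating labels and counts are hypothetical:

    # Hypothetical Danielson ratings for one component, one entry per teacher.
    prior_year = ["basic", "proficient", "basic", "unsatisfactory", "proficient"]
    current_year = ["proficient", "distinguished", "proficient", "basic", "proficient"]

    def count_high(ratings):
        # Count teachers rated proficient or distinguished.
        return sum(r in ("proficient", "distinguished") for r in ratings)

    before, after = count_high(prior_year), count_high(current_year)
    change = (after - before) / before * 100
    status = "goal met" if change >= 20 else "goal not met"
    print(f"{before} -> {after} teachers at proficient or above "
          f"({change:+.0f}%, {status})")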

As another evaluation for all three goals, I recommend that Hammond leaders

focus on formative assessment during their learning walks. Hammond’s Instructional

Leadership Team (ILT) conducts classroom learning walks several times a year to reflect

on topics such as student learning and engagement, teacher instructional strategies and

methods, and student-teacher interactions. Data collected during these non-evaluative

walks can help school leaders quickly gather a snapshot of teaching and learning in the

classroom (Fisher & Frey, 2014b). The ILT group shares impressions and questions,

determines trends, and suggests future professional development. Fisher and Frey

(2014b) outlined the learning walk process:

1. Participants (e.g., leadership team members, administration, and selected

classroom teachers) in the learning walk meet in advance with a facilitator to

review the purpose and expectations of the observations.

2. The group spends a short time in the selected classrooms (15 minutes or less).

3. Participants meet again and reflect on what they noticed and what they

wondered about concerning the classroom observations.


4. Teachers on the walk discuss commonalities with their classroom and share

insights.

5. The participants summarize findings (keeping information anonymous) and

share their reflections with staff at a meeting.

These classroom visits are also used to determine if teachers are implementing skills,

strategies, or procedures they learned during professional development. Therefore,

Hammond leaders can use their existing learning walk process to determine if teachers

are implementing new learning from this project. The ILT group can record and reflect

on the components of Goals 1, 2, and 3. The learning walk data can be compared to

previous data to verify progress in implementation or to determine areas where more

instructional support is needed. Data throughout the year should show both increased and consistent use of instructional strategies that give all students an opportunity to

respond during formative assessment.

In addition to evaluating project goals, I will ask for an assessment of my project

presentation, activities, and overall learning. Teachers will take an online Google Form

survey (see Appendix A Day 3 for Professional Development Evaluation) at the end of

the Day 3 session so that I can collect feedback about their professional development

experiences. A paper copy will also be available for participants, if preferred (see

Appendix A Day 3 for Professional Development Evaluation: Handout). Teachers will

rate 10 statements on a scale of one to five, with five being the highest. The following

statements are included on the evaluation:

1. The goals of the professional development sessions were clear.


2. The presenter was well-organized and supportive.

3. The amount of work time for group activities was appropriate.

4. The sessions were engaging.

5. Activities used to facilitate the professional development experience were

helpful.

6. Materials and handouts supported the professional development experience.

7. The instructional OTR strategies I learned were clearly described and

modeled.

8. The information I learned in the sessions was relevant and valuable.

9. This professional development experience will have a positive effect on my

practice.

10. I left with instructional strategies and ideas that I can immediately implement

in my classroom.

At the end of the survey, I provided a space for teachers to add comments or suggestions.

Data from the Google Form will be sent to my account as a spreadsheet. I will analyze

the data to understand teacher perceptions of the 3-day professional development sessions

and to determine whether the sessions were successfully implemented. Answers could

also help me improve the presentation for future audiences.
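
As an illustration of that analysis, the following Python sketch computes the mean rating and response count for each statement; the file name pd_evaluation.csv and the assumption that each statement occupies one CSV column are hypothetical details of the export:

    import csv
    from statistics import mean

    # Each column of the exported CSV holds the 1-5 ratings for one of the
    # ten evaluation statements; "pd_evaluation.csv" is a placeholder name.
    with open("pd_evaluation.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    for column in rows[0]:
        ratings = [int(r[column]) for r in rows if r[column].strip().isdigit()]
        if ratings:  # skip non-numeric columns such as timestamps and comments
            print(f"{column}: mean {mean(ratings):.2f} (n={len(ratings)})")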

Data from the evaluation sources discussed in this section should give a

comprehensive picture of how successful the project was at helping teachers implement

strategies that provide opportunities for all students to show their understanding during

formative assessment. Giving students more opportunities to respond may allow teachers
to collect more feedback about student understanding so teachers can make informed

instructional adjustments to help students meet learning goals. The results of the project

evaluation may aid in the development of plans to support consistent formative

assessment and OTR use in subsequent school years.

Implications Including Social Change

Local Stakeholders

With this project, school leaders at Hammond have an opportunity to support

formative assessment use to check for student understanding and to adjust instruction.

By using strategies that offer more students an opportunity to participate during formative

assessment, teachers can elicit the feedback needed to determine what their students

understand. Therefore, instead of collecting a limited amount of feedback about student

understanding, teachers can gain a comprehensive picture of how well students

comprehend the curricular concepts being taught in class. Accordingly, formative

assessment may no longer mean an opportunity for only a few students to show their

understanding, but rather represent an invitation for all students to share their thinking.

Students, by having increased opportunities to respond during formative assessment, may

more frequently communicate what they do and do not understand to their teachers.

Teachers, by eliciting more student responses, can then make more informed instructional

adjustments. As a result, students can gain the academic support they need to understand

the content and to meet learning goals. Not meeting district and state learning goals has been a factor in Hammond’s achievement issues, including low proficiency ratings on

state tests, high failure rates in classes, high grade retention, and low graduation rates.
Therefore, if school leaders implement the project outlined in this study, then they may

support consistent implementation of formative assessment at Hammond. Teachers’

consistent implementation of formative assessment with all students may result in social

change by increasing the overall student achievement at Hammond.

Larger Context

This project could be implemented in elementary, middle, and high schools

throughout the district, state, and country. As research has shown, most teachers collect

limited student feedback during formative assessment, meaning most students are not

assessed throughout the lesson (Duckor & Holmberg, 2017; Fisher & Frey, 2014a;

Pearson, 2018). Therefore, having professional development sessions that could

introduce teachers to effective instructional strategies that offer a greater number of

students opportunities to respond during formative assessment could be beneficial to

many schools. When teachers use OTR strategies to encourage more students to

participate during formative assessment, they can make more informed instructional

adjustments to bridge gaps in students’ understanding. The outcome may be increased

student achievement at the classroom and building levels which, in an era of

accountability, can be very appealing to schools. Although high-stakes assessments

provide much of the data for which schools are held accountable, the classroom-level

formative assessment is where learning is checked and advanced. When teachers

consistently implement formative assessment practices with all students, school leaders

may see an overall increase in student understanding of curricular concepts being taught
in classes. The resulting positive social change may be an increase in academic

achievement and a greater number of students who are college and career ready.

Conclusion

Section 3 offered a detailed description of the project that resulted from the

findings of this study. The overall goal of the project is to help teachers consistently

implement formative assessment in a manner that allows them to gain a comprehensive

picture of student understanding so that teachers can adjust their instruction to help

students meet learning goals. A review of the literature showed that researchers

recommend professional development training sessions to introduce and demonstrate new

instructional strategies and that PLCs can be utilized to support teachers as they transfer

new learning into practice. Therefore, the project consists of a 3-day professional

development where teachers can learn strategies to provide all students an opportunity to

show their understanding during formative assessment. Teachers can then collect the

student feedback necessary to make informed instructional adjustments. In addition to

the professional development sessions, the school’s existing PLC structure will be used to

sustain new learning through collaboration, reflection, and feedback. In this section, I

outlined the proposed implementation and evaluation plan for the project, and all

supporting resources can be found in Appendix A. This section concluded with project

implications at the local level and larger context along with positive social change that

may result.

In Section 4, I will discuss the strengths and limitations of the project and

recommend alternative approaches to the local problem. I will describe what I learned
from the research and development of the project, present a reflective analysis of my

personal learning and growth during the process, and reflect on the importance of the

work. I will also review the project implications, applications, and recommendations for

future research.
Section 4: Reflections and Conclusions

Introduction

The purpose of this qualitative case study project was to examine how teachers

implemented formative assessment to check for student understanding and to adjust

instruction. Data showed that participants elicited a limited number of student responses

during formative assessment. Participants could make more informed instructional

adjustments if they collected more feedback about student understanding. By

incorporating OTR strategies, teachers can offer a greater number of students

opportunities to respond during formative assessment so they can uncover student

understanding and address misconceptions. In Section 4, I will review the project’s

strengths and limitations and present alternative ways of addressing the study’s problem.

I will describe what I learned during the research and development processes of the

project as well as reflect on my growth and learning as a scholar, practitioner, and project

developer. This section also includes a discussion about the importance of the project

study, its potential to affect social change, and recommendations for future research.

Project Strengths and Limitations

Project Strengths

The primary strength of this project is its focus on targeted instructional strategies and

techniques that may help teachers consistently implement formative assessment to check

for student understanding and to adjust instruction. A review of the literature showed

that implementing OTR strategies during formative assessment can be a beneficial

instructional practice to gather feedback from all students about their understanding.
More importantly, teachers may incorporate OTR strategies into their existing

instructional practices. Teachers can immediately implement new learning about OTR

strategies to increase the formative assessment feedback they receive from students.

Professional development training sessions may be a particularly effective way to deliver

new instructional processes to staff, and when focused on specific strategies, may bring

about school-wide change (Desimone & Garet, 2015). As Kennedy (2016) pointed out,

teachers can consistently and regularly replicate instructional strategies learned during

professional development trainings. In addition, OTR strategies require very few resources, making OTR implementation affordable for schools.

Another strength is that I developed the project using research-based components

of effective professional development and Knowles’ (1973) assumptions about adult

learners. In the professional development, I addressed school and teacher needs,

communicated intended learning goals, provided ample opportunities for active learning

and teacher collaboration, focused on research-based instructional strategies, and planned

ongoing support using existing PLCs. Each professional development session was

thoughtfully crafted with the adult learner in mind: (a) I describe the relevance of the

professional development to teacher work; (b) I provide ample research, citing the

importance of formative assessment, clear learning targets, OTRs strategies, and

formative questioning; (c) I assess teacher prior knowledge and experiences through

activities that allow them time to discuss and share their ideas and skills; (d) I give

teachers multiple opportunities to apply what they learn about OTRs into practice through

independent work, group work, and role-playing; (e) I model formative assessment and
OTR strategies throughout the sessions to give examples of implementation techniques;

(f) I provide time for teachers to understand how the new OTR practices could be

integrated into their current classroom instruction; and (g) I create multiple opportunities

for teachers to be active participants throughout the sessions through group tasks, partner

sharing, whole group discussion, problem-solving, role-playing, creating OTR tools,

practicing OTR strategies, playing technological OTR formative assessment games, and

reflecting on their learning. Ongoing professional development in PLCs throughout the

school year will also offer teachers opportunities to collaborate, reflect, and receive

feedback. Aligning effective professional development practices with adult learning

theory may help teachers become knowledgeable about and comfortable with

implementing OTR strategies in their classrooms.

Project Limitations

There are several limitations of the project study. With a relatively high teacher

turnover rate at Hammond, there are often many new teachers. During data collection

midway through the school year, there were three newly hired teachers; several other

teachers were not hired until after the school year started. Therefore, there may be

teachers on staff who do not receive the 3-day training before school starts. Finding time to conduct the 3-day, 18-hour professional development for these teachers is unlikely.

New teachers may gain some understanding of OTRs during PLC meetings, but they are

not likely to develop the same level of understanding as the teachers who attended the

sessions—especially because of the highly collaborative and active nature of the sessions.

I would recommend that a school leader, PLC facilitator, or teacher adept at


implementing OTR strategies provide at least two condensed 1-hour trainings as follows:

Session 1: Writing Clear Learning Targets and Verbal and Gestural OTRs; and Session 2:

Written OTRs and Formative Questioning. The two condensed sessions could be taught

a month after school starts and again in late January. I also recommend that mentor

teachers who are assigned to the new teachers support the formative assessment work

learned during the sessions. The mentor teachers can also explain and model

technological OTR strategies that were originally taught during Day 3’s session. The

overall goal of the condensed sessions and mentoring support should be to help new

teachers fully understand and effectively implement a variety of OTR strategies during

formative assessment tasks so they can collect sufficient feedback about student

understanding to make informed instructional decisions.

Another project limitation may be the time allotted for ongoing support in PLCs.

First, PLC groups at Hammond vary in size. Some departmental PLCs have only two or

three teachers, and others may have five or six. Having 30 minutes allocated to deliver

the PLC agenda provided in the project may be feasible for the smaller PLC groups but

rushed in the larger groups. With five teachers in a group, there would only be 6 minutes

available at meetings for each person to write their reflections, share implementation

concerns and successes, and provide feedback to colleagues. Second, it is likely that PLC

meetings may be canceled during the school year due to unforeseen circumstances.

Because the agenda is developed in a manner that builds on the previous session, missing

a meeting will require the PLC facilitators to make decisions about how to effectively

“catch up” and proceed with the agenda activities. Because PLCs will split their time at
each meeting between the project’s and the school’s agendas, PLC groups may become

side-tracked and overlook the 30 minutes allocated for the project agenda to discuss their

formative assessment work. As a preventative measure, I recommend that PLCs allocate

the first 30 minutes of their time to concentrate on the project’s work (using a timer

would be beneficial), and then transfer their attention to the school agenda.

Recommendations for Alternative Approaches

The problem, as described in Section 1, involved the inconsistent use of formative

assessment at Hammond High School. This local problem could have been addressed in

several ways. I could have examined how teachers of different content areas

implemented formative assessment or how formative assessment practices of veteran

teachers and new teachers compared. Another way to approach the problem in this study

would have been to investigate how teaching styles informed teacher formative

assessment use. Additionally, I could have designed a mixed methods study. Survey

results may have been collected from participants in addition to data collected from

interviews and observations. The survey would have allowed me to determine teachers’ perspectives on their formative assessment use and the background knowledge they had about

this instructional practice.

I could have also applied an alternative approach to address the study results,

which found that teachers collected limited formative assessment feedback about student

understanding. The project could have focused solely on written formative assessment

tasks that might have allowed teachers to collect feedback from a greater number of

students through asking extended response questions. Because most participants in the
study implemented warm-ups and exit slips, a project could have been directed at

individual written formative assessment strategies given at the beginning and end of

classes. Extended response OTRs may be beneficial to use as warm-ups and exit slips so

that teachers can gain a deep understanding of student knowledge before and after a

lesson. A project focused on one type of OTR—written extended response—could have

eliminated the need for a comprehensive 3-day professional development and might have

allowed the professional development to be conducted solely during PLC time.

Scholarship

As a result of my project study, I have developed a better understanding of

scholarship and the important role it plays in advancing the field of education.

Scholarship reveals a passion for learning that sustains effective educational practices.

As professionals, educators should pursue topics of interest or problems of practice in a methodical manner to produce reliable results that can be shared with peers. Although I

have always appreciated reading scholarly works in my pursuit of professional growth, I

had not considered being a scholarly contributor before this study. Knowing that I can

contribute to my profession on a scholarly level to positively affect social change is one

of the many benefits I have gained from attending Walden. As scholars continue to build

upon or replicate their colleagues’ research, data accumulates and knowledge expands. I

have learned through my project study that it is important for educators to positively

contribute to both their local school community and their profession. Educators must not

only be actively involved in classrooms or local schools, but also be engaged in a larger

context. Fortunately, the Internet has allowed scholarly work to be accessible around the
globe, making the impact of scholarship far-reaching. Publishing a project study from

which other colleagues can learn is a thrilling prospect. I realize that my scholarly work

will not end once I complete my project study and receive my degree. Rather, this

doctoral journey was just the beginning of a life-long pursuit to continue to produce

scholarly works that may help improve upon instructional practices and further advance

the field of education.

Project Development and Evaluation

As I developed the project for this study, I gained important knowledge that

applies to my work as a teacher leader. When planning professional development

activities, the work should align with school priorities and match school and teacher

needs. Needs can be uncovered by collecting and analyzing data related to a specific

educational problem. It is necessary to find research-based programs, strategies, or

techniques to address any identified needs and to help close the gap between current practices

and desired outcomes. To increase the probability of successful implementation of

research-based practices, several factors should be considered: the components of

effective professional learning, adult learning theory, needed and existing supports, and

available resources.

While creating the professional development agendas, session presentations, and

teacher resources, I realized the process mirrored that of effective lesson planning. I

began the project by identifying the desired result, which was to help teachers

consistently provide opportunities for all students to respond during formative assessment

so they could make informed instructional adjustments. I then established clear and
measurable goals that I communicated in the sessions as learning targets so that teachers

understood the purpose of each day’s work. Next, I planned the instruction and learning

experiences needed to teach the new strategies, skills, and processes. I also used

assessment throughout the sessions to determine prior knowledge, to check for teacher

understanding, to make instructional decisions, and to evaluate learning. My

presentations exhibited a logical flow of concepts integrated with instructional modeling,

meaningful activities, thoughtful conversations, and time for regular reflection.

An important factor for the success of any professional learning is sustainability.

Too often, professional development is designed in a manner that only contributes to

short-term instructional changes; it fails to address the supports needed for long-term

transformation (Desimone & Garet, 2015). Ongoing support is necessary to address

teachers’ needs as they attempt to transfer new instructional learning into practice. If

professional development is sustained, then there is “a greater chance for transforming

teaching practices and student learning” (Darling-Hammond et al., 2017, p. 15).

Therefore, I added the use of PLCs to support the formative assessment OTR process taught

in the 3-day sessions. PLC facilitators may promote sustainability of the project study

content by using the allotted PLC time to allow teachers to share and to reflect upon how

they use OTRs so they can collect the necessary information during formative assessment

to make informed instructional adjustments. The support of mentors may also help

promote sustained learning, especially with teachers who may have missed the initial

sessions. As I continue to develop professional learning for educators, I will ensure that
it is sustainable by providing adequate support and ample time for teachers to adopt new

instructional practices.

I also realized that evaluation is an essential component of professional learning.

Checking for teacher understanding during professional development helps to uncover

any confusion or misconceptions. Results of formative assessment used throughout my

sessions can help me adjust my professional learning to meet teacher needs. Assessments

can be formal, such as the project’s Evaluation of Formative Assessment Survey and exit

slips, or more informal, such as “thumbs up” gestures or choral responses during the

presentation. Teachers can also self-evaluate through surveys, discussions, and

reflections. I have incorporated opportunities throughout my sessions for teachers to

participate in evaluative activities. Project evaluations should be created to determine if

professional development was successful. Evaluations should be multi-faceted and not

based on one source. Gathering quantitative and qualitative feedback strengthens the

evaluation. Evaluations also need to be aligned to the professional development learning

goals and used to add instructional support, revise professional development, or plan

future professional learning opportunities. Evaluations, along with monitoring and

sustained support, are key to implementing a successful professional development plan

and may help teachers transfer new learning into practice.

Leadership and Change

Throughout my project study and time at Walden, I learned what qualities of

effective leadership were necessary to bring about change. Strong leaders can promote a

vision and plan that can transform instructional practices and positively affect student
outcomes. These leaders set clear goals based on data and use research-based best

practices to support those goals. When goals involve modifying or shifting instructional practices,

leaders can provide targeted professional development. Furthermore, when leaders share

the purpose for meaningful professional development with staff and have evidence to

support a need for change, they create buy-in that encourages teachers to take ownership

of their learning and to be open and committed to change.

I learned that work from professional development can be transferred and

incorporated into classroom practices through mutual trust and regular collaboration.

Implementing professional development that is intended to alter teachers’ instructional

practices and to result in schoolwide change requires a leader who is supportive, attentive,

persistent, and motivating (The Wallace Foundation, 2013). Effective leaders know that

for any professional learning to be successfully implemented, they must plan how to

sustain the work. Ongoing support embedded throughout the school year will allow

leaders to monitor implementation, evaluate progress, and determine areas where

additional supports are needed. If leaders do not carefully consider all these processes as

part of professional development, the probability of newly learned practices resulting in

lasting change is minimal.

I also learned that leaders must engage with parents and community members to

be transparent about new initiatives and instructional processes aimed at improving

student outcomes. Effective leadership, responsive teachers, and support from parents

and community stakeholders may greatly improve the likelihood that initiatives aimed to

advance student learning will result in sustained change. As I progressed throughout my


doctoral journey, I came to appreciate the role I could play as a leader for change. I had

an opportunity to learn and practice skills of effective leadership during my project study

as I addressed a current problem of practice at my school. I look forward to using and

developing my leadership skills in other educational settings.

I believe the leadership I display while presenting and supporting the professional

development at Hammond may be a factor in the successful implementation of OTR

practices in the classroom. I have created a professional development program that may support

consistent formative assessment use to provide all students an opportunity to show their

understanding. Through my passion, encouragement, and support, I hope to motivate

teachers to improve upon their formative assessment practices. I also plan to share my

work with the district school board and present my findings at a Hammond school

meeting open to the public. With more transparency, I may gain additional support to

conduct professional development at other schools.

Reflection of Self as Scholar

Through my work at Walden, I have learned much about being a scholar. During

the project study process, I quickly realized how important resilience was for completing

my doctorate. Progress was sometimes slow, and considerable patience was needed,

especially during the prospectus stage as I attempted to gain approval for my study.

Being able to clearly articulate the problem, rationale, and significance of the study at

times seemed to be an insurmountable task. However, through persistence and the ability

to accept and act upon constructive feedback from my committee, I was able to overcome

obstacles and to progress through the multiple stages of the project study.
Being a scholar has meant consistent growth and reflection. Working through

numerous drafts and revisions helped me to become more precise in my thinking and

writing and to develop my scholarly voice. I also grew in my knowledge and use of

scholarly practices. During my pursuit of professional learning in the past, I was solely

focused on the results of research studies, reading educational articles that discussed

applications of study findings. I rarely read about study methodology, strengths and

weaknesses, biases, validity, reliability, and transferability of findings. I now understand

all aspects of the research process as well as the need to critically analyze studies. Before

my time at Walden, I had never conducted a valid research study. As I complete my

doctorate, I now have experience with the rigor of designing and conducting a study and

have developed a deep appreciation for research. I see myself conducting research in the

future and continuing to make valid contributions to the educational field.

When I set my sights on an educational goal, I have always had tremendous

tenacity; however, I accepted a new level of challenge when I decided to pursue my

doctorate. The work at Walden was demanding and rigorous, as work at this level should

be. I learned to value the struggle and appreciate even the smallest step forward. As a

result, my experiences and growth as a scholar have given me the confidence to pursue

opportunities where I can initiate positive change within the educational community. I

also look forward to conducting additional research and publishing scholarly writings. I

have been an avid learner within my field, constantly seeking ways to improve as a

professional and to stay current on best practices and educational trends. Now I feel the
need not only to be a consumer of research but also to be a scholarly contributor on

whose work other educators can draw.

Reflection of Self as Practitioner

My deep commitment to quality education for all students and the desire to

advance in my profession led me to pursue a doctorate in curriculum, instruction, and

assessment. As a result of my doctoral journey, I have grown as a practitioner. The skills

and knowledge I developed throughout my doctoral process have given me the

confidence to seek new educational prospects. I have held several leadership roles as a

teacher and was given an opportunity to transfer into an administrative position several

years ago; however, I wanted to remain in the classroom. As I progressed through my

courses and project study at Walden, I began to desire a position that would allow me to

effect positive change beyond my classroom. I recently applied for, interviewed for, and

accepted a leadership position at a local school. My new position as an

Instructional/Data Coach requires me to use many skills that I developed during my

project study. For example, during instructional coaching, I collect and analyze data,

research and model best practices, and support teachers as they implement the new

instructional processes. I conduct classroom observations, interview students and

teachers, and examine assessment data. Triangulating data allows me to have evidence-

based conversations with teachers aimed at improving practice. In my new role, I have an

exciting opportunity to help improve student learning in the school. Many of my other

job duties also directly relate to my work on the project study: developing trust with the

staff, using data to uncover areas of focus for school improvement, creating and
delivering professional development, supporting and sustaining new learning, monitoring

progress, and evaluating professional learning and student growth. The strong writing

skills, analytical and critical thinking, adaptability, self-reflection, and tenacity I developed

during my project study are extremely beneficial to me as a practitioner.

In addition to applying my skills in the research process, I incorporate my

extensive knowledge about formative assessment into my work. Formative assessment is

a very current, relevant, and necessary topic to address with educators. If teachers

desire to increase student achievement, they must consistently check student

understanding of learning goals so they can quickly address gaps in student learning. In

my current educational role, I regularly support formative assessment implementation

and using student feedback to adjust instruction. I also promote the use of OTR

strategies, which give all students an opportunity to respond during formative assessment,

to help teachers collect the necessary feedback to make informed instructional decisions.

Without a doubt, all the knowledge, skills, and personal growth from my work at Walden

are invaluable as a practitioner.

Reflection of Self as Project Developer

I have delivered professional development on many occasions in the past and

enjoy the process of creating and presenting educational learning sessions. I am

extremely thorough in my instructional planning and consider aspects of learning such as

relevance, engagement, collaboration, and reflection. However, during the research and

development of this project, I came to a greater understanding of how the content of

professional development should be determined. My previous presentations at


conferences were not necessarily based on a school’s need, but rather on instructional

practices that I wanted to share with other teachers. I came to appreciate how data are

collected and analyzed to determine an instructional need, and how research must support

practices that address that need. I also learned about adult learning theory and recognize

the necessity to incorporate activities that support adult learners into professional learning

experiences. A few aspects of adult learning that I newly considered were providing

opportunities for teachers to share their experiences, assigning tasks that required

collaborative problem-solving, allocating time for teachers to immediately apply new

knowledge through role-playing, and offering teachers a choice to personalize their

learning. Additionally, I recognize the importance of establishing a process to monitor

and support new learning. My presentations have usually consisted of one-time

workshops. Unfortunately, research shows one-time workshops with no support are not

an effective form of professional development and likely will not lead to successful

implementation of new instructional practices. Professional development meant to cause

lasting change in schools must be sustainable.

I had a unique opportunity to reflect on my learning as a project developer when I

interviewed for my current position. Many questions the interviewers asked me were

directly related to the work I had recently completed for my project. When I was invited

to discuss my ability to develop sustained professional development for teachers, I could

not help but confidently smile. I began explaining that professional development should

be determined by analyzing reliable data to address a specific instructional need. I then

outlined my process of incorporating effective professional development components and


adult learning theory into the training sessions, providing ongoing support, and using

multiple sources of evaluation to determine effectiveness. My educational experiences

and work at Walden resulted in a job offer. In my new position, I will continue to

conduct research and analyze data so I can create professional learning opportunities for

teachers that may help improve their instructional practices and ultimately increase

student achievement.

Reflection on Importance of the Work

Regularly checking all students’ understanding of content learning goals

is crucial to helping students succeed in school. Unfortunately, research has shown that

most students’ understandings are left unchecked (Fisher & Frey, 2014a). My findings

revealed that teachers at Hammond regularly gave students formative assessment tasks

and asked formative questions; however, they only checked the understanding of a few

students. The same students often answered most of the formative questions, while the

other students sat passively. During written formative assessment tasks, such as the

warm-ups and exit slips, many students participated; however, teachers did not collect

student feedback about their understanding. Without the deliberate review of what

students understand, a teacher cannot determine the proper next steps in instruction to

help bridge any learning gaps. The result is that teachers realize students are

struggling academically only after a summative assessment.

As a result of study findings, I concentrated my project on consistent

implementation of formative assessment to regularly check for understanding from all

students. With adequate formative feedback, teachers can make informed instructional
adjustments so misunderstandings can be addressed, concepts can be re-explained, and

lessons can be modified. Accordingly, students can meet learning goals. The project

outlined in this study, using OTRs during formative assessment, is an important

instructional strategy that allows teachers to uncover and quickly determine what all their

students do and do not understand. As student learning needs are addressed through regular

checks for understanding in all classrooms, overall student achievement may increase. In

a school that has consistently struggled with student achievement, increased academic

outcomes could result in positive social change. More students may have the credits

needed each year to be promoted to the next grade level; not falling behind in credits may

result in fewer students dropping out and more students graduating. Society suffers when

students do not graduate or are not prepared for a career after graduation because they did

not understand the concepts taught in their classes over the years. In a high-needs school

with a large number of students at or below poverty level, having a solid educational

background is extremely important for post-secondary success. There was a wide

achievement gap between students at Hammond and students in surrounding private and

award-winning schools. This project has the potential to help more students understand

the concepts being taught in their classes, have the credits necessary to graduate from

high school, and be more prepared for their futures. I hope leaders at the district will

recognize my work, understand the benefits of teachers consistently implementing

formative assessment with all students, and invite me to share my presentation in their

other schools. I truly believe that the project resulting from my research has the potential

to bring about positive change in the local district and the community.
Implications, Applications, and Directions for Future Research

Research has consistently shown formative assessment to be a critical component

for teaching and learning. Unfortunately, teachers inconsistently implement formative

assessment in schools across the nation (Box et al., 2015; Fisher & Frey, 2014a; Popham,

2014; Wylie & Lyon, 2015). Study findings revealed that teachers at a local urban school

inconsistently implemented formative assessment by only gathering limited feedback

about student understanding. Most teachers used the IRE model to elicit answers from

students, which allowed only one or two students to give responses during formative

questioning. When teachers do not understand what all their students know or do not

know, they make instructional adjustments based on the responses of only a few students.

Consequently, teachers may not address the misunderstandings of most of the class. As a

result, students do not meet district and state learning goals and student achievement

suffers. Conversely, if teachers used formative assessment consistently in their classes,

and they implemented instructional strategies that gave all students an opportunity to

respond during formative assessment, then teachers would have a clear picture of student

understanding. This clarity would allow teachers to make informed instructional

adjustments that would benefit all students academically.

Future research could enhance the results of this study. Conducting a similar

case study in multiple contexts could add insights into the study

findings. The local school used in the study was a large, urban, high-need, low-

performing high school. It would be interesting to investigate how teachers used

formative assessment at the high-performing, nationally rated high school in the same


district. Although study participants used formative assessment regularly, they did not

elicit responses from most students; students predominantly sat passively during

formative questioning. Therefore, a study could be conducted to determine if there were

similar findings at a high-performing school.

A descriptive study could be conducted in which teachers who had been trained in

formative assessment were observed to determine whether they collected more student

feedback than teachers who had no formal training. The one participant who elicited the

most student feedback shared that he had received training in formative assessment

several years prior, which had inspired him to use this instructional practice in his classroom.

Also, school data showed that 40% of the participants had been teaching for five or fewer

years. Perhaps a descriptive study could be conducted on the amount and level of

training pre-service teachers receive in formative assessment and, if they were trained,

the extent to which they implement the practices they learned in their classrooms.

Four participants had stated that student behavior problems and classroom

management inhibited them from implementing formative assessment with more fidelity.

Using OTR strategies in the classroom has been shown to decrease problematic student

behaviors (Haydon et al., 2013; Messenger et al., 2017; Tincani & Twyman, 2016). A

case study could be conducted that would investigate teachers’ perceptions of student

behavior problems after a year of consistently implementing OTR strategies during

formative assessment. In addition, far fewer studies investigate student

perspectives on formative assessment than examine teacher perspectives. An interesting case study

would be to interview or survey students before and after their teacher began regularly
implementing formative assessment OTRs to check for understanding. Research

questions could focus on whether students found themselves more actively involved in

formative assessment tasks, which OTR strategies the students participated in the most

and why, and whether students felt they understood concepts better (or received better grades)

in classes where their teacher used OTRs during formative assessment.

Future experimental studies could determine what effect consistent OTR

implementation had on student achievement at both the classroom and building level.

The research could be conducted with all students or with a subgroup. For example,

there was a large population of students with learning disabilities in the classrooms at the

local school. As a subgroup, these students have struggled academically, thus creating an

achievement gap. Studies have shown that using OTR strategies during formative

assessment greatly supports learning-disabled (LD) students (Messenger et al., 2017;

Tincani & Twyman, 2016). Therefore, an experimental study could be conducted to

determine if this subgroup improved academically by comparing grades of LD students in

classes where the teachers regularly implemented OTR strategies with those of LD students in

classes where teachers did not use OTR strategies.

In addition to conducting further studies that add to the body of literature about

formative assessment, I recommend that school leaders consider other components of the

formative assessment process. In this study, I chose to narrow the formative assessment

process by focusing on only two practices—checking for student understanding and using

the feedback to adjust instruction. The formative assessment process, however, also

involves the teacher providing descriptive feedback to students about their work, students
using teacher feedback to reflect upon and improve their learning, and students

collaborating as resources to support the learning process. These formative assessment

practices have positive effects on student learning (Duckor & Holmberg, 2017;

Wiliam, 2018). I recommend that school leaders research and develop a plan for teachers

to incorporate these additional components of the formative assessment process to

continue strengthening overall formative assessment implementation.

Conclusion

The purpose of this study was to examine how teachers implemented formative

assessment to check for student understanding and to adjust instruction. Data showed

that participants inconsistently implemented formative assessment; they only collected

limited feedback about student understanding. Consequently, participants were unable to

make informed instructional adjustments that reflected current student understanding. I

developed a professional development project to help teachers at Hammond gather more

student feedback during formative assessment. The project consisted of three

professional development sessions that focused on formative assessment and the need to

check for student understanding, writing and communicating clear learning targets,

implementing the four types of OTR strategies to collect formative feedback from all

students, and using questioning techniques to probe student thinking during OTRs. Time in

existing PLCs will be used throughout the school year to provide the ongoing support

needed for teachers to effectively transfer new learning into practice.

Research showed that implementing OTR strategies, which provide all students

with opportunities to respond during formative assessment, helped teachers to collect the
necessary feedback about student understanding. With adequate feedback, teachers can

uncover misunderstandings and adjust their instruction to help students meet learning

goals. With more students meeting district and state learning goals in their classes,

student achievement has the potential to increase. The result of increased

achievement may be a greater number of students passing classes and,

ultimately, earning the credits required to graduate. Overall, students may leave school

with a greater understanding of the topics they studied and be more prepared for their

futures.
References

Akpan, J. P., Notar, C. E., & Padgett, S. A. (2012). Formative assessment: A basic

foundation for teaching and learning. National Teacher Education Journal, 5(1),

83-97. Retrieved from https://ntejournal.com/

Ali, I., & Iqbal, H. M. (2013). Effect of formative assessment on students’ achievement

in science. World Applied Sciences Journal, 26(5), 677-687.

doi:10.5829/idosi.wasj.2013.26.05.1622

Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and

assessing: A revision of Bloom's Taxonomy of educational objectives (Complete

ed.). New York, NY: Longman.

Andersson, C., & Palm, T. (2017). The impact of formative assessment on student

achievement: A study of the effects of changes to classroom practices after a

comprehensive professional development programme. Learning and Instruction,

49, 92-100. doi:10.1016/j.learninstruc.2016.12.006

Antoniou, P., & James, M. (2014). Exploring formative assessment in primary schools

classrooms: Developing a framework of actions and strategies. Educational

Assessment, Evaluation and Accountability, 26, 153-176. doi:10.1007/s11092-

013-9188-4

Ateh, C. M. (2015). Science teachers’ elicitation practices: Insights for formative

assessment. Educational Assessment, 20(2), 112-131.

doi:10.1080/10627197.2015.1028619
Baird, J., Hopfenbeck, T. N., Newton, P., Stobart, G., & Steen-Utheim, A. (2014). State

of the field review: Assessment and learning. Oxford University Centre for

Educational Assessment Report. Retrieved from https://www.nafsa.org/

Bayar, A. (2014). The components of effective professional development activities in

terms of teachers’ perspective. International Online Journal of Educational

Sciences, 6(2), 319-327. Retrieved from ERIC database. (ED552871)

Bellert, A. (2015). Effective re-teaching. Australian Journal of Learning Difficulties,

20(2), 163-183. doi:10.1080/19404158.2015.1089917

Bennett, R. E. (2014, November). Preparing for the future: What educational assessment

must do. Teachers College Record, 116, 1-18. Retrieved from ERIC database.

(EJ1040208)

Birenbaum, M., DeLuca, C., Earl, L., Heritage, M., Klenowski, V., Looney, A., . . . Wyatt-

Smith, C. (2015). International trends in the implementation of assessment for

learning: Implications for policy and practice. Policy Futures in Education, 13,

117-140. doi:10.1177/1478210314566733

Black, P. (2015, February). Formative assessment—an optimistic but incomplete vision.

Assessment in Education: Principles, Policy & Practice, 22(1), 161-177.

doi:10.1080/0969594X.2014.999643

Black, P., & Wiliam, D. (1998a). Assessment and classroom learning. Assessment in

Education, 5(1), 7-71. doi:10.1080/0969595980050102


Black, P., & Wiliam, D. (1998b). Inside the black box: Raising standards through

classroom assessment. Phi Delta Kappan, 80(2), 139-148.

doi:10.1177/003172171009200119

Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment.

Educational Assessment, Evaluation and Accountability, 21(1), 5-31.

doi:10.1007/s11092-008-9068-5

Bloom, B. S. (1969). Some theoretical issues relating to educational evaluation. In R. W.

Tyler (Ed.), Educational evaluation: New roles, new means. The 68th yearbook of

the National Society for the Study of Education (Part II), 68(2), 26-50. Chicago,

IL: University of Chicago Press.

Box, C., Skoog, G., & Dabbs, J. M. (2015). A case study of teacher personal practice

assessment theories and complexities of implementing formative assessment.

American Educational Research Journal, 52(5), 956-983.

doi:10.3102/0002831215587754

Brink, M., & Bartz, D. (2017). Effective use of formative assessment by high school

teachers. Practical Assessment, Research & Evaluation, 22(8), 1-10. Retrieved

from ERIC database. (EJ1160662)

Brookhart, S. M., & Moss, C. M. (2014). Learning targets on parade. Educational

Leadership, 72(2), 28-33. Retrieved from http://www.ascd.org

Brown, C., & Militello, M. (2016). Principal’s perceptions of effective professional

development in schools. Journal of Educational Administration, 54(6), 703-726.

doi:10.1108/JEA-09-2014-0109
Bulunuz, N., Bulunuz, M., & Peker, H. (2014). Effects of formative assessment probes

integrated in extra-curricular hands-on science: Middle school students’

understanding. Journal of Baltic Science Education, 13(2), 243-258. Retrieved

from http://www.scientiasocialis.lt/jbse/

Cakiroglu, O. (2014). Effects of preprinted response cards on rates of academic response,

opportunities to respond, and correct academic responses of students with mild

intellectual disability. Journal of Intellectual and Developmental Disability,

39(1), 73-85. doi:10.3109/13668250.2013.844777

Center on Innovations in Learning. (2016). Guidelines and suggestions for choral

responding. Retrieved from http://www.centeril.org

Chan, P., Konrad, M., Gonzalez, V., Peters, M. T., & Ressa, V. A. (2014). The critical

role of feedback in formative instructional practices. Intervention in School and

Clinic, 50(2), 96-104. doi:10.1177/1053451214536044

Chappuis, J. (2015). Seven strategies of assessment for learning (2nd ed.). Upper Saddle

River, NJ: Pearson Education.

Chappuis, S., & Stiggins, R. J. (2002). Classroom assessment for learning. Educational

Leadership, 60(1), 40-43. Retrieved from https://www.ascd.org

Chroinín, D., & Cosgrave, C. (2013). Implementing formative assessment in primary

physical education: Teacher perspectives and experiences. Physical Education

and Sport Pedagogy, 18(2), 219-233. doi:10.1080/17408989.2012.666787


Cisterna, D., & Gotwals, A. W. (2018). Enactment of ongoing formative assessment:

Challenges and opportunities for professional development and practice. Journal

of Science Teacher Education, 29(3), 200-222.

doi:10.1080/1046560X.2018.1432227

Clark, I. (2012a). Formative assessment: Assessment is for self-regulated learning.

Educational Psychology Review, 24(2), 205-249. doi:10.1007/s10648-011-9191-6

Clark, I. (2012b). Formative assessment: A systematic and artistic process of instruction

for supporting school and lifelong learning. Canadian Journal of Education,

35(2), 24-40. Retrieved from http://cje-rce.ca

Clark, I. (2015). Formative assessment: Translating high-level curriculum principles into

classroom practice. Curriculum Journal, 26(1), 91-114.

doi:10.1080/09585176.2014.990911

Clarke, L. S., Haydon, T., Bauer, A., & Epperly, A. C. (2016). Inclusion of students with

an intellectual disability in the general education classroom with the use of

response cards. Preventing School Failure, 60(1), 35-42.

doi:10.1080/1045988X.2014.966801

Conderman, G., & Hedin, L. (2012). Classroom assessments that inform instruction.

Kappa Delta Phi Record, 48, 162-168. doi:10.1080/00228958.2012.733964

Cornelius, K. E. (2014). Formative assessment made easy. Teaching Exceptional

Children, 47(2), 112-118. doi:10.1177/0040059914553204


Council of Chief State School Officers (CCSSO). (2008). Attributes of effective formative

assessment. Paper prepared for the Formative Assessment for Teachers and

Students (FAST). Washington, D.C. Retrieved from

http://www.ccsso.org/documents/2008/attributes_of_effective_2008.pdf

Creswell, J. W. (2013). Qualitative inquiry & research design (3rd ed.). Thousand Oaks,

CA: Sage Publications.

Crossouard, B., & Pryor, J. (2012). How theory matters: Formative assessment theory

and practices and their different relations to education. Studies in Philosophy and

Education, 31(3), 251-263. doi:10.1007/s11217-012-9296-5

Curry, K. A., Mwavita, M., Holter, A., & Harris, E. (2016). Getting assessment right at

the classroom level: Using formative assessment for decision making.

Educational Assessment, Evaluation, and Accountability, 28(1), 89-104.

doi:10.1007/s11092-015-9226-5

Danielson, C. (2007). Enhancing professional practice: A framework for teaching (2nd

ed.). Alexandria, VA: Association for Supervision and Curriculum Development.

Darling-Hammond, L., Hyler, M. E., & Gardner, M. (2017). Effective teacher

professional development. Palo Alto, CA: Learning Policy Institute. Retrieved

from https://learningpolicyinstitute.org

Dehdary, N. (2017). A look into a professional learning community. Journal of Language

Teaching and Research, 8(4), 645-654. doi:10.17507/jltr.0804.02


DeLuca, C., & Bellara, A. (2013). The current state of assessment education: Aligning

policy, standards and teacher education curriculum. Journal of Teacher

Education, 64(4), 356-372. doi:10.1177/0022487113488144

Desimone, L. M., & Garet, M. S. (2015). Best practices in teachers’ professional

development in the United States. Psychology, Society, & Education, 7(3), 252-

263. doi:10.25115/psye.v7i3.515

Dixson, D. D., & Worrell, F. C. (2016). Formative and summative assessment in the

classroom. Theory Into Practice, 55(2), 153-159.

doi:10.1080/00405841.2016.1148989

Duckor, B. (2014). Formative assessment in seven good moves. Educational Leadership,

71(6), 28-32. Retrieved from http://www.ascd.org

Duckor, B., & Holmberg, C. (2017). Mastering formative assessment moves: 7 high

leverage practices to advance student learning. Alexandria, VA: Association for

Supervision and Curriculum Development.

Dunn, K. E., Airola, D. T., Lo, W., & Garrison, M. (2012). What teachers think about

what they can do with data: Development and validation of the data-driven

decision-making efficacy and anxiety inventory. Contemporary Educational

Psychology, 38, 87-98. doi:10.1016/j.cedpsych.2012.11.002

Dunn, K. E., & Mulvenon, S. (2009). A critical review of research on formative

assessment: The limited scientific evidence of the impact of formative assessment in education. Practical

Assessment, Research and Evaluation, 14(7), 1-11. Retrieved from

https://pareonline.net/
Earl, L. (2013). Assessment as learning: Using classroom assessment to maximize student

learning. Thousand Oaks, CA: SAGE Publications.

Earley, P., & Porritt, V. (2014). Evaluating the impact of professional development: The

need for a student-focused approach. Professional Development in Education,

40(1), 112-129. doi:10.1080/19415257.2013.798741

Emmer, E. T., & Sabornie, E. J. (Eds.). (2015). Handbook of classroom management (2nd

ed.). New York, NY: Routledge.

Filsecker, M., & Kerres, M. (2012). Repositioning formative assessment from an

educational assessment perspective: A response to Dunn & Mulvenon (2009).

Practical Assessment, Research & Evaluation, 17(16), 1-9. Retrieved from

http://pareonline.net

Fisher, D., & Frey, N. (2014a). Checking for understanding: Formative assessment

techniques for your classroom (2nd ed.). Alexandria, VA: Association for

Supervision and Curriculum Development.

Fisher, D., & Frey, N. (2014b, January). Using teacher learning walks to improve

instruction. Principal Leadership, 14(5), 58-61. Retrieved from

https://www.nassp.org

Fuller, J. S., & Dawson, K. M. (2017). Student response systems for formative

assessment: Literature-based strategies and findings from a middle school

implementation. Contemporary Educational Technology, 8(4), 370-389.

Retrieved from ERIC database. (EJ1158166)


Furtak, E., Kiemer, K., Circi, R., Swanson, R., de Leon, V., Morrison, D., & Heredia, S.

(2016). Teachers’ formative assessment abilities and their relationship to student

learning: Findings from a four-year intervention study. Instructional Science, 44,

267-291. doi:10.1007/s11251-016-9371-3

Glickman, C., Gordon, S., & Ross-Gordon, J. (2007). Supervision and instructional

leadership: A developmental approach. New York, NY: Pearson.

Great Schools Dashboard. (2016). Retrieved from http://www.greatschools.org/

Guskey, T. R. (2014). Planning professional learning. Educational Leadership, 71(8), 10-

16. Retrieved from http://www.ascd.org

Guskey, T. R. (2016, February). Gauge impact with 5 levels of data. Journal of Staff

Development, 37(1), 32-37. Retrieved from ERIC database. (EJ1100399)

Guskey, T. R. (2017). Where do you want to get to? Effective professional learning

begins with a clear destination in mind. Learning Professional, 38(2), 32-37.

Retrieved from https://learningforward.org

Guskey, T. R., Roy, P., & von Frank, V. (2014). Reaching the highest standard in

professional learning: Data. Thousand Oaks, CA: Corwin Press & Learning

Forward.

Hadar, L. L., & Brody, D. L. (2016). Professional development for teacher educators in

the communal context: Factors which promote and hinder learning. In B. De

Wever, R. Vanderlinde, M. Tuytens, & A. Aelterman (Eds.), Professional

learning in education: Challenges for teacher educators, teachers, and student

researchers (pp. 57-78). Gent: Academia Press.


Hanover Research. (2014). The impact of formative assessment and learning intention on

student achievement. Retrieved from http://www.hanoverresearch.com

Hattie, J. (2012). Visible learning for teachers: Maximizing impact on learning. New

York, NY: Routledge.

Havnes, A., Smith, K., Dysthe, O., & Ludvigsen, K. (2012). Formative assessment and

feedback: Making learning visible. Studies in Educational Evaluation, 38(1), 21-

27. doi:10.1016/j.stueduc.2012.04.001

Haydon, T., MacSuga-Gage, A. S., Simonsen, B., & Hawkins, R. (2012). Opportunities

to respond: A key component of effective instruction. Beyond Behavior, 22(1),

23-31. Retrieved from http://www.ccbd.net/Publications/BeyondBehavior

Haydon, T., Marsicano, R., & Scott, T. M. (2013). A comparison of choral and individual

responding: A review of the literature. Preventing School Failure, 57(4), 181-188.

doi:10.1080/1045988X.2012.682184

Heitink, M. C., Van der Kleij, F. M., Veldkamp, B. P., Schildkamp, K., & Kippers, W. B.

(2016). A systematic review of prerequisites for implementing assessment for

learning in classroom practice. Educational Research Review, 17, 50-62.

doi:10.1016/j.edurev.2015.12.002

Helf, S. (2015). Increasing opportunities for student responding: Response cards in the

classroom. Clearing House, 88(6), 182-184. doi:10.1080/00098655.2015.1115749


Heritage, M. (2010). Formative assessment and next-generation assessment systems: Are

we losing an opportunity? National Center for Research on Evaluation,

Standards, and Student Testing (CRESST) and the Council of Chief State School

Officers. CCSSO: Washington, D.C. Retrieved from

https://www.edpolicyinca.org

Heritage, M., & Heritage, J. (2013). Teacher questioning: The epicenter of instruction

and assessment. Applied Measurement in Education, 26(3), 176-190.

doi:10.1080/08957347.2013.793190

Herman, J. (2013, September). Formative assessment for next generation science

standards: A proposed model. International Research Symposium on Science

Assessment. Retrieved from https://www.ets.org/Media/Research/pdf/herman.pdf

Hill, H. C., Beisiegel, M., & Jacob, R. (2013). Professional development research:

Consensus, crossroads, and challenges. Educational Researcher, 42(9), 476-487.

doi:10.3102/0013189X13512674

Hill, J. (2016). Questioning techniques: A study of instructional practice. Peabody

Journal of Education, 91(5), 660-671.

Himmele, P., & Himmele, W. (2017). Total participation techniques: Making every

student an active learner. Alexandria, VA: Association for Supervision and

Curriculum Development.

Hudesman, J., Crosby, S., Flugman, B., Issac, S., Everson, H., & Clay, D. B. (2013).

Using formative assessment and metacognition to improve student achievement.

Journal of Developmental Education, 37(1), 2-13. Retrieved from ERIC database.

(EJ1067283)
Jiang, Y. (2014). Exploring teacher questioning as a formative assessment strategy.

RELC Journal, 45(3), 287-304. doi:10.1177/0033688214546962

Johnson, J., Uline, C., & Perez, L. (2013). Teaching practices from America's best urban

schools: A guide for school and classroom leaders. New York, NY: Routledge.

Kang, H., Thompson, J., & Windschitl, M. (2014). Creating opportunities for students to

show what they know: The role of scaffolding in assessment tasks. Science

Education, 98(4), 674-704. doi:10.1002/sce.21123

Keeley, P. (2013). Is it melting? Formative assessment for teacher learning. Science and

Children, 51(3), 26-28. doi:10.2505/4/sc130510326

Kennedy, M. M. (2016). How does professional development improve teaching? Review

of Educational Research, 86(4), 945-980. doi:10.3102/0034654315626800

Kingston, N., & Nash, B. (2012). Formative assessment: A meta-analysis and a call for

research. Educational Measurement: Issues and Practice, 30(4), 28-37.

doi:10.1111/j.1745-3992.2011.00220.x

Kintz, T., Lane, J., Gotwals, A., & Cisterna, D. (2015). Professional development at the

local level: Necessary and sufficient conditions for critical colleagueship.

Teaching and Teacher Education, 51, 121-136. doi:10.1016/j.tate.2015.06.004

Kira, E., Komba, S., Kafanabo, E., & Tilya, F. (2013). Teachers’ questioning techniques

in advanced level chemistry lessons: A Tanzanian perspective. Australian Journal

of Teacher Education, 38(12), 66-78. doi:10.14221/ajte.2013v38n12.7


Knowles, M. (1973). The adult learner: A neglected species. American Society for

Training and Development. Houston, TX: Gulf Publishing Company. Retrieved

from ERIC Database. (ED084368)

Konrad, M. (2014). Introduction to formative instructional practices. Intervention in

School and Clinic, 50(2), 67-68. doi:10.1177/1053451214536044

Landrum, R. E. (2013). The ubiquitous clicker SoTL applications for scientist-educators.

Teaching of Psychology, 40(2), 98–103. doi:10.1177/0098628312475028

Lauer, P. A., Christopher, D. E., Firpo-Triplett, R., & Buchting, F. (2014). The impact of

short-term professional development on participant outcomes: A review of the

literature. Professional Development in Education, 40(2), 207-227.

doi:10.1080/19415257.2013.776619

Learning Forward. (2013). Standards for professional learning. Retrieved from

https://learningforward.org/wp-content/uploads/2017/09/school-based-

professional-learning-unit-4-packet.pdf

Learning Forward. (2017). A new vision for professional learning. Retrieved from

https://learningforward.org/docs/default-source/getinvolved/essa/

essanewvisiontoolkit

Li, H. (2016). How is formative classroom assessment related to students’ reading

achievement? Findings from PISA 2009. Assessment in Education: Principles,

Policy & Practice, 23(4), 473-494. doi:10.1080/0969594X.2016.1139543

Linquanti, R. (2014). Supporting formative assessment for deeper learning: A primer for

policymakers. Retrieved from http://www.ccsso.org/


Lyman, F. (1981). The responsive classroom discussion: The inclusion of all students.

In A. Anderson (Ed.), Mainstreaming digest (pp. 109-113). College Park, MD:

University of Maryland Press.

MacSuga-Gage, A. S., & Simonsen, B. (2015). Examining the effects of teacher-directed

opportunities to respond on student outcomes: A systematic review of literature.

Education and Treatment of Children, 38, 211-239. doi:10.1353/etc.2015.0009

Madison-Harris, R., & Muoneke, A. (2012, January). Using formative assessment to

improve student achievement in the core content areas. Briefing Paper, Southeast

Comprehensive Center at SEDL. Retrieved from http://secc.sedl.org

Magno, C., & Lizada, G. (2015). Features of classroom formative assessment.

Educational Measurement and Evaluation Review, 6, 23-31. Retrieved from

https://ejournals.ph

Mandinach, E. B. (2012). A perfect time for data use: Using data-driven decision making

to inform practice. Educational Psychologist, 47(2), 71-85.

doi:10.1080/00461520.2012.667064

Mandinach, E. B., & Gummer, E. S. (2013). A systemic view of implementing data

literacy in educator preparation. Educational Researcher, 42(1), 30–37.

doi:10.3102/0013189X12459803

Marshall, J., & Smart, J. (2013). Interactions between classroom discourse, teacher

questioning, and student cognitive engagement in middle school science. Journal

of Science Teacher Education, 24(2), 249-267. doi:10.1007/s10972-012-9297-9


Matherson, B. L., & Windle, T. M. (2017). What do teachers want from their professional

development? Four emerging themes. Delta Kappa Gamma Bulletin, 83(3), 28-

33. Retrieved from http://www.deltakappagamma.org

McCollum, K., & Boles, A. (2013). Protocols and templates for literacy strategies.

Retrieved from https://www.maine.gov/doe

McGlynn, K., & Kelly, J. (2017). Using formative assessments to differentiate

instruction. Science Scope, 41(4), 22-25. doi:10.2505/4/ss170410422

Mehmood, T., Hussain, T., Khalid, M., & Azam, R. (2012). Impact of formative

assessment on academic achievement of secondary school students. International

Journal of Business and Social Science, 3(17), 101-104. Retrieved from

http://ijbssnet.com

Menzies, H. M., Lane, K., & Oakes, W. P. (2017). Increasing students’ opportunities to

respond: A strategy for supporting engagement. Intervention in School and Clinic,

52(4), 204-209. doi:10.1177/1053451216659467

Merriam, S. B., & Tisdell, E. J. (2016). Qualitative research: A guide to design and

implementation (4th ed.). San Francisco, CA: Jossey-Bass.

Messenger, M., Common, E., Lane, K., Oakes, W., Menzies, H., Cantwell, E., & Ennis,

R. (2017). Increasing opportunities to respond for students with internalizing behaviors: The

utility of choral and mixed responding. Behavioral Disorders, 42(4), 170-184.

doi:10.1177/0198742917712968
Michigan Department of Education (MDE). (2011). Professional learning policy:

Supporting guidance. Retrieved from

https://www.michigan.gov/documents/mde/PL_Guidance_Public_Comment_FIN

AL_111011_368414_7.pdf

Miranda, R. J., & Hermann, R. S. (2015). Teaching in real time: Integrating continuous

formative assessment into inquiry-based classroom instruction. Science &

Children, 53(1), 80-85. Retrieved from ERIC database. (EJ1116136)

Moss, C. M., Brookhart, S. M., & Long, B. A. (2013). Administrators’ roles in helping

teachers use formative assessment information. Applied Measurement in

Education, 26, 205-218. doi:10.1080/08957347.2013.793186

Nagro, S. A., Hooks, S. D., Fraser, D. W., & Cornelius, K. E. (2016). Whole group

response strategies to promote student engagement in inclusive classrooms.

Teaching Exceptional Children, 48(5), 243-249. doi:10.1177/0040059916640749

No Child Left Behind Act of 2001, 20 U. S. C. § 6319 (2002).

OECD (2013). Synergies for better learning. An international perspective on evaluation

and assessment. OECD Reviews of Evaluation and Assessment in Education.

Paris: OECD Publishing. Retrieved from http://www.oecd.org

Oweis, A. (2014, February). The missing link in professional learning: Sustained

relational support. Christian Teachers Journal, 22(1), 24-28.
Patton, K., Parker, M., & Tannehill, D. (2015). Helping teachers help themselves:

Professional development that makes a difference. NASSP Bulletin, 99(1), 26-42.

doi:10.1177/0192636515576040

Patton, M. Q. (2015). Qualitative research & evaluation methods (4th ed.). Thousand

Oaks, CA: Sage Publications.

Pearsall, G. (2018). Fast and effective assessment: How to reduce your workload and

improve student learning. Alexandria, VA: Association for Supervision and

Curriculum Development.

Perrotta, C., & Whitelock, D. (2017). Assessment for learning. In E. Duval, M. Sharples,

& R. Sutherland (Eds.), Technology enhanced learning: Research themes (pp.

127-135). New York, NY: Springer International Publishing.

Piaget, J. (1954). The construction of reality in the child. New York, NY: Basic Books.

Popham, W. J. (2013). Tough teacher evaluation and formative assessment: Oil and

water? Voices from the Middle, 21(2), 10-14. Retrieved from

https://www.ncte.org

Popham, W. J. (2014). Classroom assessment: What teachers need to know (7th ed.).

Boston, MA: Allyn and Bacon.

Ramaprasad, A. (1983). On the definition of feedback. Behavioral Science, 28, 4-13.

doi:10.1002/bs.3830280103

Randel, B., Apthorp, H., Beesley, A. D., Clark, T. F., & Wang, X. (2016). Impacts of

professional development on teacher and student outcomes. Journal of

Educational Research, 109(5), 491-502. doi:10.1080/00220671.2014.992581


Regier, N. (2012). Book two: 60 formative assessment strategies. Regier Educational

Resources. Retrieved from http://www.stma.k12.mn.us/documents/DW/

Q-comp/FormativeAssessStrategies.pdf

Roskos, K., & Neuman, S. B. (2012). Formative assessment: Simply, no additives.

Reading Teacher, 65(8), 534-538. doi:10.1002/TRTR.01079

Ruiz-Primo, M. A., & Li, M. (2013). Analyzing teachers’ feedback practices in response

to students’ work in science classrooms. Applied Measurement in Education, 26,

163-175. doi:10.1080/08957347.2013.793188

Sach, E. (2012). Teachers and testing: An investigation into teachers’ perceptions of

formative assessment. Educational Studies, 38(3), 261-276.

doi:10.1080/03055698.2011.598684

Sach, E. (2015). An exploration of teachers' narratives: What are the facilitators and

constraints which promote or inhibit ‘good’ formative assessment practices in

schools? International Journal of Primary, Elementary and Early Years

Education, 43(3), 322-335. doi:10.1080/03004279.2013.813956

Sadler, D. R. (1989). Formative assessment and the design of instructional systems.

Instructional Science, 18, 119-144. doi:10.1007/BF00117714

Sanzo, K., Myran, S., & Caggiano, J. (2015). Formative assessment leadership: Identify,

plan, apply, assess, refine. New York, NY: Routledge.

Sass-Henke, A. M. (2013). Living and learning: Formative assessment in a middle level

classroom. Voices from the Middle, 21(2), 43-47. Retrieved from

https://www.ncte.org
Scriven, M. (1967). The methodology of evaluation. In R. W. Tyler, R. M. Gagné, & M.

Scriven (Eds.), Perspectives of curriculum evaluation (Vol. 1, pp. 39-83).

Chicago, IL: Rand McNally.

Sezen-Barrie, A., & Kelly, G. J. (2017). From the teacher’s eyes: Facilitating teachers

noticings on informal formative assessments (IFAs) and exploring the challenges

to effective implementation. International Journal of Science Education, 39(2),

181-212. doi:10.1080/09500693.2016.1274921

Shepard, L. A. (2000). The role of assessment in a learning culture. Educational

Researcher, 29(7), 4-14. doi:10.3102/0013189X029007004

Shirley, M. L., & Irving, K. E. (2015). Connected classroom technology facilitated

multiple components of formative assessment practice. Journal of Science

Education and Technology, 24(1), 56-68. doi:10.1007/s10956-014-9520-x

Smylie, M. A. (2014). Teacher evaluation and the problem of professional development.

Mid-Western Education Researcher, 26(2), 97-111. Retrieved from

http://www.mwera.org

Spector, J. M., Ifenthaler, D., Sampson, D., Yang, L., Makuma, E., Warusavitarana, A.,

& Gibson, D. C. (2016). Technology enhanced formative assessment for 21st century

learning. Educational Technology & Society, 19(3), 58-71. Retrieved from ERIC

database. (EJ1107417)

SRI School Reform Initiative. (2017). Protocols and resources. Retrieved from

http://www.schoolreforminitiative.org
Stanley, T., & Alig, J. (2014). The school leader’s guide to formative assessment: Using

data to improve student and teacher achievement. New York, NY: Routledge.

Staunton, M., & Dann, C. (2016). Formative assessment: Improvement, immediacy and

the edge for learning. International Journal of Pedagogies and Learning, 11(1),

22-34. doi:10.1080/22040552.2016.1187647

Stefl-Mabry, J. (2018). Documenting evidence of practice: The power of formative

assessment. Knowledge Quest, 46(3), 50-57. Retrieved from ERIC database.

(EJ1165040)

Stewart, C. (2014). Transforming professional development to professional learning.

Journal of Adult Education 43(1), 28-33. Retrieved from ERIC database.

(EJ1047338)

Stiggins, R. (2014). Defensible teacher evaluation: Student growth through classroom

assessment. Thousand Oaks, CA: SAGE Publications.

Stiggins, R., & DuFour, R. (2009). Maximizing the power of formative assessments. Phi

Delta Kappan, 90(9), 640-644. doi:10.1177/003172170909000907

Supovitz, J. (2012). Getting at student understanding—the key to teachers' use of test

data. Teacher College Record, 114, 1-29. Retrieved from ERIC database.

(EJ1001991)
Sztajn, P., Confrey, J., Wilson, P. H., & Edgington, C. (2012). Learning trajectory based

instruction: Toward a theory of teaching. Educational Researcher, 41(5), 147-

156. doi:10.3102/0013189X12442801

The Wallace Foundation. (2013, January). The school principal as leader: Guiding

schools to better teaching and learning. Retrieved from

http://www.wallacefoundation.org

Tincani, M., & Twyman, J. S. (2016). Enhancing engagement through active student

response. Center on Innovations in Learning. Temple University, PA. Retrieved

from http://www.centeril.org

Tomlinson, C. (1999). The differentiated classroom: Responding to the needs of all

learners. Alexandria, VA: Association for Supervision and Curriculum

Development.

Tomlinson, C. (2014). The bridge between today's lesson and tomorrow's. Educational

Leadership, 71(6), 10-14. Retrieved from http://www.ascd.org

Torrance, H. (2012). Formative assessment at the crossroads: Conformative, deformative

and transformative assessment. Oxford Review of Education, 38(3), 323-342.

doi:10.1080/03054985.2012.689693

Tridane, M., Belaaouad, S., Benmokhtar, S., Gourja, B., & Radid, M. (2015, July). The

impact of formative assessment on the learning process and the unreliability of the

mark for the summative evaluation. Procedia: Social and Behavioral Sciences,

197, 680-685. doi:10.1016/j.sbspro.2015.07.058


TRIPOD. (2016). 2015-2016 school year report of student perceptions at Hammond

High School. Retrieved from http://tripoded.com/

Trumbull, E., & Lash, A. (2013). Understanding formative assessment: Insights from

learning theory and measurement theory. San Francisco, CA: WestEd. Retrieved

from https://www.wested.org/online_pubs/resource1307.pdf

Van den Bergh, L., Ros, A., & Beijaard, D. (2015). Teacher learning in the context of a

continuing professional development programme: A case study. Teaching and

Teacher Education, 47, 142-150. doi:10.1016/j.tate.2015.01.002

Van der Kleij, F., Vermeulen, J., Schildkamp, K., & Eggen, J. (2015). Integrating data-

based decision making, assessment for learning and diagnostic testing in

formative assessment. Assessment in Education: Principles, Policy & Practice,

22(3), 324-343. doi:10.1080/0969594X.2014.999024

Vygotsky, L. (1978). Mind in society: The development of higher psychological

processes. Cambridge, MA: Harvard University Press.

Whitney, T., Cooper, J. T., & Lingo, A. S. (2015). Providing student opportunities to

respond in reading and mathematics: A look across grade levels. Preventing

School Failure: Alternative Education for Children and Youth, 59, 14-21.

doi:10.1080/1045988X.2014.919138
Whitney, T., Cooper, J. T., & Lingo, A. S. (2017). Increasing student engagement

through opportunities to respond. Kentucky Teacher Education Journal: The

Journal of the Teacher Education Division of the Kentucky Council for

Exceptional Children, 3(2), 1-8. Retrieved from

http://digitalcommons.wku.edu/ktej/iss2/3

Wiggins, G. (2012). Seven keys to effective feedback. Educational Leadership, 70(1),

10-16. Retrieved from http://www.ascd.org

Wiliam, D. (2012). Feedback: Part of a system. Educational Leadership, 70(1), 31-

34. Retrieved from http://www.ascd.org

Wiliam, D. (2013). Assessment: The bridge between teaching and learning. Voices from

the Middle, 21(2), 15-20. Retrieved from https://www.ncte.org

Wiliam, D. (2014). The right questions, the right way. Educational Leadership, 71(6),

16-19. Retrieved from http://www.ascd.org

Wiliam, D. (2018). Embedded formative assessment (2nd ed.). Bloomington, IN: Solution

Tree Press.

Wiliam, D., & Leahy, S. (2015). Embedding formative assessment: Practical techniques

for K-12 classrooms. West Palm Beach, FL: Learning Sciences International.

Wood, M. B., Turner, E. E., Civil, M., & Eli, J. A. (Eds.). (2016). Proceedings of the 38th

annual meeting of the North American Chapter of the International Group for the

Psychology of Mathematics Education. Tucson, AZ: The University of Arizona.

Retrieved from http://www.pmena.org


Wylie, E., & Lyon, C. (2015). The fidelity of formative assessment implementation:

Issues of breadth and quality. Assessment in Education: Principles, Policy &

Practice, 22(1), 140-160. doi:10.1080/0969594X.2014.990416

Yan, Z., & Cheng, E. C. K. (2015). Primary teachers’ attitudes, intentions and practices

regarding formative assessment. Teaching and Teacher Education, 45, 128-136.

doi:10.1016/j.tate.2014.10.002

Yao, Y. (2015). Teacher perceptions of classroom assessment: A focus group interview.

SRATE Journal, 24(2), 51-58. Retrieved from ERIC database. (EJ1083125)

Yin, R. K. (2014). Case study research: Design and methods (5th ed.). Thousand Oaks,

CA: SAGE Publications.

Yin, R. K. (2016). Qualitative research from start to finish (2nd ed.). New York, NY: The

Guilford Press.

Yin, Y., Tomita, M., & Shavelson, R. (2013). Using formal embedded formative

assessments aligned with a short-term learning progression to promote conceptual

change and achievement in science. International Journal of Science Education,

36(4), 531-552. doi:10.1080/09500693.2013.787556


Appendix A: Project Study

The project study consists of three 390-minute professional development

sessions and year-long professional learning community (PLC) support. Appendix A is divided by session day and

includes agendas, PowerPoint presentations, and materials for each session. The purpose

of the professional development sessions is to provide teachers with instructional

strategies to increase students’ opportunities to respond (OTRs) during formative assessment so

that teachers will have the necessary feedback to make informed instructional

adjustments. A PLC collaboration component, in which teachers can reflect and provide

feedback on formative assessment and OTR implementation practices, was developed to

support and sustain the new learning. The agenda and materials for the PLCs are in the final section of

this appendix. The three goals of the project were as follows:

• Project Goal 1: Teachers will write clear student learning targets from state

and district standards and align their formative assessments to these

learning targets.

• Project Goal 2: Teachers will consistently implement instructional

strategies to give all students opportunities to respond during formative

assessment so they can collect adequate feedback about student

understanding.

• Project Goal 3: Teachers will use the formative assessment feedback they

collected to adjust their instruction to help students meet learning goals.



Day 1 Professional Development Session Agenda and Resources


DAY 1 AGENDA

35 minutes: Welcome and Introduction
• Group ice breaker
• Facilitator introduction and purpose for professional development
• Go over norms and agenda
• Teacher Formative Assessment Practices Survey—pre-assessment
• Learning targets for the day

60 minutes: Introduction to Formative Assessment
• Defining Formative Assessment activity
• Presentation: benefits of formative assessment, brief overview of research linking
regular formative assessment and student achievement, research about teacher FA use
• Group discussion—experiences with formative assessment in the classroom
• Share out—challenges, successes, wonderings

10 minutes: Break

35 minutes: Text Discussion
• Teachers will have preread the article by Tomlinson, C. (2014). The bridge between
today’s lesson and tomorrow’s. Educational Leadership, 71(6), 10-14.
• Share thoughts using the Four A’s Text Protocol with table groups and chart three
notable statements
• Whole group share out
• FA—visual connections

75 minutes: Clear Learning Targets Introduction
• Learning Target Anticipation Guide Part 1
• Presentation: what learning targets are, why they are needed, connection to FA, what
research says; reflect on your LTs
• Basics of clear learning targets: using KUDs (Tomlinson) to develop learning targets
• FA—ABC activity
• Learning target structure and FA practice
• Communicating learning targets to students—share out and suggestions

60 minutes: Lunch

45 minutes: Writing Clear Learning Targets
• FA—Learning Target Anticipation Guide Part 2
• How to go from standards to “I can” statements—4 steps
• Practice writing learning targets from standards—use the LT Planning Sheet

60 minutes: PLC Work Time—Writing Clear Learning Targets
• Break out—time with department PLCs to work on 1st marking period learning targets
• Brief share out of accomplishments

10 minutes: Closing Remarks
• Share “ah ha” moments and take-aways
• Exit slip FA—Why are clear learning targets important to formative assessment
implementation? Compare your current learning target practices to what you learned
today.

Total: 390 minutes
[Day 1 PowerPoint presentation slides]
Teacher Formative Assessment Practices Survey

Rate each item in the appropriate column (Pre-Assessment, Mid-Term Assessment, or
Post-Assessment) using the following scale:
Never = 1, Rarely = 2, Sometimes = 3, Very Often = 4, Always = 5

Learning Targets
1. I usually break down a content standard into many learning targets (as opposed to 1
or 2).
2. I write all standards into student-friendly “I can” statements.
3. I include knowledge and skills in learning targets.
4. I make sure my formative assessment questions align with my learning targets.
5. I communicate the learning targets with students each lesson.
6. At any given time, my students could state the learning target for the lesson.

FA Practices
7. I use formative assessment in my class several times each day.
8. I stop several times during each class to check whether all students understand what
I am teaching (not just getting a couple of students’ responses).
9. After I ask a question to check for student understanding, the majority of my
students participate by giving an answer.
10. When I am teaching, I regularly have evidence about what most of my class is
thinking/understands (as opposed to only a few students or no students).
11. I have a good command of techniques to encourage student participation when I ask
questions to check if students understand what I am teaching.
12. Students in my class wait to be called on to give answers (as opposed to calling out
the answers).
13. I use thumbs-up/five fingers to check for student understanding.
14. I use individual whiteboards to check for student understanding.
15. I use response cards to check for student understanding.
16. I use choral response to check for student understanding.
17. I use clickers to check for student understanding.
18. I use other technologies to check for student understanding (websites, apps, etc.).

Student Feedback and Adjusting Instruction
19. I know what the majority of my students wrote down for their warm-up answers
each day.
20. I use information I learned from warm-ups to adjust my instruction that day or to
reteach concepts.
21. I find it quick and easy to determine what all my students understand throughout the
class period.
22. When I learn that students do not understand something, I immediately stop and
reteach.
23. I recheck for student understanding after I have retaught or re-explained a concept
with which they struggled.
24. At the end of the class hour, I give and collect evidence about whether students
understood what I taught that day.
25. I use information from responses I collected from exit slips to plan my next lesson.
Defining Formative Assessment Activity Handout

Materials: chart paper, markers, tape, sticky notes

Procedure:
1. Each member of the group will individually write what they believe are
components of a definition of formative assessment on self-adhesive sticky notes,
one attribute or idea per note.

2. In groups of four or five, share the attributes written on the sticky notes, clustering
similar ideas together.

3. Look for similarities and record them on a paper.

4. Come to consensus on the key points to include in a definition of formative


assessment.

5. As a group, construct a definition using the key points generated.

6. Write your group definition on the poster paper, underlining your key
components.

7. Groups will share out their definitions.

8. As a whole group, we will craft a final school definition so all staff will have a
common understanding of formative assessment.
Four “A”s Text Protocol

Purpose: To explore a text deeply in light of one’s own values and intentions

Procedure:
1. The group reads the text silently, highlighting and writing notes in the
margins or on sticky notes and then answers the following four questions:
• What Assumptions does the author of the text hold?
• What do you Agree with in the text?
• What do you want to Argue with in the text?
• What parts of the text do you want to Aspire to (or Act upon)?

2. In a round, have each person identify one assumption in the text, citing the
text (with page numbers, if appropriate) as evidence.

3. Continue in rounds for each of the remaining “A”s, taking them one at a
time. What do people want to agree with, argue with, and aspire to (or act
upon) in the text? Try to move seamlessly from one “A” to the next, giving
each “A” enough time for full exploration.

4. End the session with an open discussion framed around a question such as:
What does this mean for our work with students?

Variation:
Groups can add their own “A”s such as Alignment: What is the current reality,
and what is the gap between where we are and our aspirations?

Source: SRI School Reform Initiative. (2017). Protocols and resources. Retrieved from
http://www.schoolreforminitiative.org/download/four-as-text-protocol/
Learning Target Anticipation Guide

Directions: Read each statement carefully and place a check in one of the “Before”
columns that represents your opinion. Place your papers flipped over in the center of
your table. After the lesson, you will revisit your first opinions and place a check in one
of the “After” columns. Be prepared to defend any of your responses.

Columns: Before (Agree/Disagree), Statement, After (Agree/Disagree)


1. Learning targets should be broad
statements that describe what students
should know after a lesson.
2. Each content standard can translate
into a single learning target that can be
written in student-friendly language.
3. KUDs, as they relate to learning
targets, stand for Knowing,
Understanding, and Defining.
4. Research shows a link between
learning targets and student achievement.
5. Formative assessment questions asked
in class should directly or indirectly align
with the day’s learning target.
6. “I can understand how the legislative,
executive, and judicial branches of
government work” is an example of a
clear learning target.
7. Posting the learning target in a visible
place is the best way to communicate
learning targets to students.
8. The verbs in all the learning target
statements should be high on Bloom’s
list so students learn at a deeper level.
9. All learning targets should include
how the student will be assessed.
10. Students who can identify what
target they are learning significantly
outscore those who cannot.
***I believe that I have been correctly
writing my student learning targets.***
ABC Brainstorm Activity

Directions: Think of what you know about learning targets. What action verbs do you
associate with learning targets? What do you want your students to be able to do in their
“I can” statements? Each space below represents the letter with which a verb starts. When
time begins, write down as many verbs associated with learning targets as you can—
up to two verbs per letter.

A _______________________________ N _______________________________

B _______________________________ O _______________________________

C _______________________________ P _______________________________

D _______________________________ Q _______________________________

E _______________________________ R _______________________________

F _______________________________ S _______________________________

G _______________________________ T _______________________________

H _______________________________ U _______________________________

I _______________________________ V _______________________________

J _______________________________ W ______________________________

K _______________________________ X _______________________________

L _______________________________ Y _______________________________

M ______________________________ Z _______________________________
Learning Target Verbs by Level of Complexity

Remember: list, choose, repeat, label, state, name, underline, match, tell, arrange,
define, describe, recognize, memorize, select, find, identify

Understand: summarize, demonstrate, show, execute, translate, illustrate, classify,
predict, interpret, contrast, restate, rephrase, explain, estimate, compare, outline,
discuss

Apply: calculate, develop, sketch, model, use, execute, complete, solve, perform,
apply, construct, conduct

Analyze: categorize, contrast, theorize, analyze, simplify, debate, classify, distinguish,
appraise, compare, differentiate, inspect, diagnose, relate, test

Evaluate: conclude, prove, interpret, investigate, support, measure, justify, decide,
recommend, choose, argue, evaluate, defend, assess, determine, deduct, compare

Create: compose, develop, invent, integrate, formulate, propose, combine, modify,
devise, create, predict, establish, build, design, synthesize

Source: Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and
assessing: A revision of Bloom's Taxonomy of educational objectives (Complete ed.). New York, NY:
Longman.
Learning Target Planning Sheet

Unit

Standard(s):
What standards will I
be addressing in this
unit?

Understand:
General learning
statement or the big
picture concept

What do I want
students to
understand at the end
of the unit?

Do:
“I can” statements
with an action verb
(measurable); skills
or products

What do students
need to do in order to
understand the big
picture concept?

Know:
“I can” statements
(list, name, define,
label…) OR
A list of vocabulary,
facts or rules needed
to know for the “Do”

What do students
have to know to do
the skill or create the
product?

Day 2 Professional Development Session Agenda and Resources


DAY 2 AGENDA

20 minutes: Welcome
• Warm-up question—graphic organizer
• Review of Day 1’s exit slips
• Go over norms
• State learning targets for the day

25 minutes: Introduction to OTRs
• Reflection
• Problem with limited feedback during FA
• Partner share
• Presentation: what an OTR is, why it is used, benefits, research linking OTRs and
participation during formative assessment

20 minutes: Verbal and Gestural OTRs
• Present information and examples of these two types of OTRs
• Practice OTRs with participants
• Consensus on gestural strategies for classroom signs

40 minutes: Written OTRs—Response Cards and Whiteboards
• Presentation: what they look like, what they are used for, how to implement, ideas
• Create a set of response cards and a whiteboard
• Practice lesson using OTR strategies—you are the students

10 minutes: Break

60 minutes: Written OTRs—Extended Response
• Presentation: extended response OTRs and how to implement them
• Extended response ideas (list with examples)
• What to do after collecting OTRs
• Plan and develop three extended response OTRs
• OTRs in action—video clips of teachers using OTRs
• Setting student expectations and managing OTR materials

60 minutes: Lunch

80 minutes: Practice Teaching With the OTRs Learned
• Steps to implementing OTRs and implementation rates
• Teachers meet in department PLCs. Goal: Teach an 8-minute lesson from a 1st
marking period standard. Start with the learning targets and use at least 3 different
OTR strategies you learned today.

65 minutes: Role-Play Lesson Share
• Each PLC group gives its lesson (6 departments—English/LA, science, mathematics,
health/phys. ed., social studies, art/music/electives)

10 minutes: Closing Remarks
• Share “ah ha” moments and take-aways
• FA—exit slip

Total: 390 minutes
[Day 2 PowerPoint presentation slides]
Written OTRs—Extended Response Handout

Any written OTR formative assessment strategy below can be used to collect an extended
written response from all students to check for understanding. Many of these strategies
can be used as an exit slip. Responses should be collected and reviewed, and student
feedback should be used to determine next steps for instruction.

Exit Slips Before students leave at the end of class, ask them a question or
pose a problem for them to solve. Students record their responses
on a half sheet of paper, an index card, or a sticky note. Collect the
exit slips as the students leave the classroom. Glance through the
exit slips to determine if students generally understand the topic or
whether you need to provide further whole class or small group
instruction in a particular area. Separate the exit slips into piles,
indicating students who have mastered the learning targets or are
well on their way to doing so, students who are making steady
progress, and students who need additional one-on-one or small
group instruction. Exit slips can be used to create groupings for the
next day’s lesson and activities can be planned based on the
students’ responses.

One Sentence Asking students to give you a one sentence summary of what they
Summaries learned provides you with information about what your students
know about a topic. Give students time to reflect on their learning
and encourage students to think about their response. The depth of
the student summaries will indicate their understanding of the topic
or unit to date and provide you with direction for future planning of
lessons. *Alternative: write 3 summaries—one 10-15 words long,
one 30-50 words long, and one 75-100 words long to show their
understanding.
Sentence Stems Sentence prompts can be used to assess students and gather
information about what they understand. Create a sentence starter and
let students respond. For example, they can choose one of the
following:

The most difficult part of the lesson today was…


I understand ... OR I don’t understand ...
The main thing I learned about today’s topic is ...
Two questions I have about what I learned are ...
I could use some help with …
I predict that ... because…
I would like to get better at …
Quick Writes Quick writes give teachers a visual of student learning. Provide
students with an open-ended question and set an amount of time for
them to write, from 2 to 5 minutes. Tell students not to worry
about the conventions of writing but rather focus on getting their
ideas down on paper. When the time is up, ask students to put their
pencils down. Look through the quick writes for valuable information
regarding the knowledge and understanding your students have about
a given topic.
One Minute The one-minute essay is a quick formative assessment strategy that
Essay allows you to gauge student understanding of a particular topic. Pose
a question to the students and have the students respond. Tell the
students they have one minute to write down their response. Ensure
the question you ask can be answered in one minute. Use questions
that cause students to reflect on their learning. Use Bloom’s
Taxonomy of question starters to help create high-level questions.
Learning Logs Learning logs are notes students make during a unit of study. Time is
set aside at the beginning or end of class for students to write about
what they have learned, list any questions they may have about the
topic, or make connections between the topic and their own lives.
Learning logs provide you with valuable information about what
students understand and possible directions for future instruction.
3 - 2 - 1 The 3-2-1 strategy is a quick way to gain information about the level
of understanding students have about a current unit of study. Ask
students to jot down 3 things they have learned about a topic, make 2
personal connections, and state 1 thing that is unclear or give 3
differences between ___ and ___, 2 similarities between ___ and ___,
and 1 question you have on the topic (can do many variations).

Above strategies and descriptions selected from Regier, N. (2012). Book Two: 60
Formative Assessment Strategies. Regier Educational Resources.

Graphic Graphic organizers give students a visual template to write down


Organizer what they know in an organized way. Good to check for student
understanding after a lesson – students can be directed to not use
their notes to show a deeper understanding. Links for templates:
http://www.teach-nology.com/worksheets/graphic/
https://www.eduplace.com/graphicorganizer/
http://freeology.com/graphicorgs/
Examples: Venn Diagram, Tree Chart, Concept Web, Cause-Effect,
T-Charts, Flow-Chart, Compare/Contrast, Mind Map
I Used to Ask students to compare their ideas from the start of the lesson to
Think…But their ideas at the end of the lesson. This is a good way to see if
Now I Know misconceptions were cleared up or if there are gaps in their learning.
Triangle, Students write down a Triangle idea—three main points that they
Square, Circle learned in the lessons, a Square idea—something that “squared” or
agreed with what they previously knew or thought, and a Circle
idea—something going around in their head that they don’t quite
understand or that they wonder about.
Misconception Present students with a common misconception about the current
Check concept, principle, or process they are learning. Ask them whether
they agree or disagree with the statement and explain why. They
should show specific examples or state pieces of evidence in their
defense.
UPS Check This strategy works well with any problems that you want students to
do in order to show their understanding. The “U” stands for
understanding the problem first. The student will write what the
problem is asking for in his own words. The “P” represents planning
out the steps to be used: the student writes or depicts
the steps used to solve the problem. The “S” stands for solving the
problem. At this point the student solves the problem. Finally, the
“Check” asks students to make sure that the answer makes sense and
to give the reasoning.
Quick Draws Give students the task of drawing out a concept or idea that they
with learned in the lesson. They should label or explain what each part of
Explanation their drawing means as a way of explaining their thinking. An
alternative is to have students answer the question: My picture
represents ________ because ________.
Spot the Similar to the Misconception Check, students are presented with a
Mistake problem, statement, visual, equation, or even paragraph with a
deliberate mistake(s) in it. They are to explain what the mistake(s) is,
why the information is incorrect, and state what the correct answer
should have been. This will let the teacher know if students
understand on a deeper level.
ABC Brainstorm Use the ABC strategy as a pre-assessment before writing or as a way
to assess what students have learned about a learning target or topic.
Have students write the alphabet on paper or have a pre-printed sheet
available. They associate a letter of the alphabet with a vocabulary
term or key idea that indicates their understanding. Students try to
fill as many letter spaces as possible.
Word Sort Students are given a set of vocabulary terms and they must place
them in logical categories (graphic organizer) and write their
justification for the categories. Students could also be given the
categories and justify in which category they would place each word.
Video Observations of Whole Group OTR Implementation Handout

1. Watch the video clips: https://www.teachingchannel.org/videos/show-your-


cards-student-assessment (5:03) and
https://www.youtube.com/watch?time_continue=68&v=xQErAKiSc68
(1:41)

a. What OTR strategy did you see being used and how did the teachers
implement it?

b. What are some noticings and wonderings you have about your
observations?

2. Watch the video clip https://www.youtube.com/watch?v=8yFaZxprJEU (3:30)

a. What OTR strategy did you see being used and how did the teachers
implement it?

b. What are some noticings and wonderings you have about your
observations?
3. Watch the video clip: https://www.teachingchannel.org/videos/student-daily-
assessment (4:35)

a. What OTR strategy did you see being used and how did the teachers
implement it?

b. What are some noticings and wonderings you have about your
observations?

4. Watch the video clip: https://www.teachingchannel.org/videos/ups-strategy-


as-assessment-tool (2:19)

a. What OTR strategy did you see being used and how did the teachers
implement it?

b. What are some noticings and wonderings you have about your
observations?
Implementation Think-Pair-Share Handout

1. Think: a. What are some possible ways that you could implement verbal,
gestural, or written (response cards, individual whiteboards, or extended response)
strategies in your classroom?

b. Have you had experience implementing any of the OTR strategies in


these sessions in the past? (explain)

2. Pair: Find a partner that matches the symbol you were given on the back of your
paper.
a. Discuss your answers to questions 1a and 1b above.

b. Together:
1) Write three statements that you would like to share from your
discussion.

2) Name a possible challenge of implementing a specific OTR and give


some possible solutions.

3. Share your statements and ideas with the whole group when the presenter
signals.

Day 3 Professional Development Session Agenda and Resources


DAY 3 AGENDA
Teachers need to bring laptops.

15 minutes: Welcome
• Warm-up questions using Survey Monkey
• Review of Day 2’s exit slips
• State learning targets for the day

20 minutes: Technological OTRs
• Presentation: why to use tech OTRs, benefits, ideas for how to use them
• Tech OTR scenario
• Tech and classroom management ideas

75 minutes: PD Choices
• Teachers are given a choice of two different PD tech OTR options. Option A:
Clickers; Option B: Google Forms
• Teachers will attend a presentation about their technology and then be given time to
use the tools to create an FA to use in their 1st unit

10 minutes: Break

95 minutes: Exploring Other Technological OTRs
• Whole group activities: Kahoot, Mentimeter, Quizlet Live, Padlet, Socrative, Quizizz,
and Plickers
• Teachers will be given a list of other online technologies and time to explore how
they could use them in their classrooms as OTRs

60 minutes: Lunch

80 minutes: FA Questioning During OTRs
• Reflection on giving warm-ups
• Implementing warm-ups—Think-Pair-Share activity
• Presentation on formative questions for OTRs: planning, gaining deeper
understanding, and balancing questions
• Modeling questioning for understanding with response cards
• Reflection
• Reading activity—Chapter 1, “More effective questioning,” from Fast and Effective
Assessment (2018) by Glen Pearsall, available free online at
http://www.ascd.org/publications/books/118002/chapters/More-Effective-
Questioning.aspx
• Use the Final Word protocol to discuss the article in groups
• FA of the article by using clickers
• Effective questioning during formative assessment
• Reflection

35 minutes: Closing
• Pair-Share-Move activity—summary of learning
• Teacher Formative Assessment/OTR Commitment Form
• Next steps for professional development—PLCs
• End-of-PD evaluation survey on Google Forms

Total: 390 minutes
[Day 3 PowerPoint presentation slides]
Technological Opportunities to Respond

Name and Web Address Description

AnswerGarden Online polling. This real-time tool allows


https://answergarden.ch/ teachers to see student feedback when asking
questions to check for student understanding.
AnswerPad A blank page that functions like an individual
http://www.theanswerpad.com/ whiteboard for each student.
Clickers Teachers use a software program where they
(personal response devices) can ask questions to check for student
understanding. Students use a device to input
their answers anonymously. The teacher can
see (or post for students to see as well) real-
time feedback to immediately address
misunderstandings.
Formative This site provides teachers with the
https://goformative.com/ opportunity to check for student
understanding by asking questions, receiving
the results in real time, and then providing
immediate feedback to students.
Google Forms A Google Drive application that allows
https://www.google.com/forms/about/ teachers to create documents that students use
to take formative assessment quizzes. Real-
time data response software allows teachers to
quickly analyze data by question.
Kahoot A game-based classroom response system.
https://kahoot.com/ This fast-paced, fun quizzing game can be
used during formative assessment to see
student answers in real-time, to give
immediate feedback to students, and to
reteach content.
Mentimeter Fill presentations with questions to ask
https://www.mentimeter.com/ students to check for understanding. Real-
time results help teachers adjust instruction.
Padlet Students can share responses by posting onto
https://padlet.com/ an online “board.” Great for exit tickets.
Quick to make and share.
Pear Deck Add-on to Google Slides. Allows you to make
https://www.peardeck.com/ interactive presentations where students can
follow along on their own device and
participate in formative assessment activities.
Plickers Check for understanding in classrooms with
https://www.plickers.com/ limited technology—only need one teacher
device. Print answer cards from the website.
Students are given a card. Each code card can
be turned in four orientations for the answers
A, B, C, D. Use the Plickers mobile app to
scan the answers students hold up on their
cards and see a bar graph of responses.
Poll Everywhere Once students record their response on a
https://www.polleverywhere.com/ device, the results can be displayed on a
screen in real-time. Use this tool as a way to
collect immediate formative data in any
content area.
Quia Create games, quizzes, surveys, and more to
https://www.quia.com/web check for student understanding. Can access a
database of existing quizzes from other
educators to save time.
Quizalize Fun classroom team games. Instantly know
https://www.quizalize.com/ who needs help and what they need help with.
Effortlessly assign follow-up activities that
boost student results.
Quizizz Use in class as teams or as self-paced quizzes
https://quizizz.com/ to assess and engage students. Can also assign
a quiz to be completed as homework. Use
reports by class and student to help reflect on
teaching and provide a gauge as to what
students have learned.
Quizlet Live Create flashcards, tests, quizzes, and study
https://quizlet.com/features/live games that are engaging and accessible online
and via a mobile device. Students work in
teams and log on with a code to begin playing
and compete to show their understanding of
vocabulary.
Socrative Educational exercises and games with real-
https://www.socrative.com/ time results that will help determine whether
reteaching is needed or if the students are
ready to move on to new concepts. Review
reports to prepare for future classes.
Triventy Free game platform for students to take
http://www.triventy.com/ quizzes. These live quizzes provide real-time
data on student understanding of classroom
concepts.
ZipGrade Turn your phone or tablet into a grading
https://www.zipgrade.com/ machine similar to a scantron. Download
answer sheets for students to fill out their
formative assessment. Instant feedback by
grading exit tickets and quizzes as soon as
they finish. Similar to
https://get.quickkeyapp.com
Final Word Protocol

Purpose: This protocol is designed to help participants understand the meaning of a text,
particularly to see how meaning can be constructed and supported by the ideas of others.

After the group’s presenter shares his or her thinking, interesting similarities and
differences in interpretations will arise as other participants share their thinking without
judgment or debate. The presenter listens and may then change his or her perspective,
add to it, or stick with original ideas without criticism.

Procedure:
1. Have each group select a time keeper and beginning presenter (the presenter
of the group will go first, and then pass the role clockwise).

2. All participants may read the same text, or participants may read different
texts on a common topic for a jigsaw effect.

3. Participants read silently and annotate the text. They mark passages for
discussion so they can quickly locate them later. To promote critical thinking,
design prompts for the discussion that ask participants to include reasons for
selecting a particular passage and evidence that supports a particular point.

4. Presenter shares a designated number of passages and his or her thinking


about them.

5. Each participant, in a clockwise format, comments on what the presenter


shared for up to 1 minute.

6. The presenter gets the final word by sharing how his or her thinking evolved
after listening to others or re-emphasizing what was originally shared.

7. Follow steps 4 - 6 with each additional participant taking the role of presenter.

Source: Expeditionary Learning. (2013). Appendix: Protocols and Resources. Retrieved


from https://www.engageny.org/sites/default/files/resource/attachments/appendix_
protocols_and_resources.pdf
Pair-Share-Move Activity

Materials: Projected questions, music

Procedure:
1. Have all participants stand.

2. When the music starts, all participants walk around high-fiving others.

3. When the music stops, they pair up with the person with whom they are closest.

4. The presenter projects the first question on the screen:

(1) Why is it important to collect feedback about all students’ understandings


from warm-ups and exit slips?

(2) What are some ideas of how you could collect more feedback from warm-
ups and exit slips by using OTR strategies you learned?

(3) How well do you feel you incorporate effective questioning during your
formative assessment?

(4) What questioning strategies do you plan to integrate into your formative
assessment implementation this school year?

5. The pairs take turns discussing their answers to the question (about 2 minutes).

6. As soon as the music starts again, they must stop talking and start walking around
high-fiving.

7. Continue for several pair-ups for each question.


Teacher Formative Assessment / OTR Commitment Form

Name _________________________________________________ Date ___________

Department: ______________________________________________

During the professional development sessions, I learned about (1) clear learning targets to
focus my formative assessment and communicate goals with students; (2) verbal,
gestural, written, and technological OTR strategies to provide all students with an
opportunity to participate during formative assessment; and (3) questioning to further
uncover student understanding during and after OTR responses. My specific plan for
each of the three categories for the _____ - _____ school year will be as follows:

Clear Learning Targets:

________________________________________________________________________

________________________________________________________________________

________________________________________________________________________

________________________________________________________________________

________________________________________________________________________

Formative Assessment OTRs:

________________________________________________________________________

________________________________________________________________________

________________________________________________________________________

________________________________________________________________________

________________________________________________________________________

________________________________________________________________________

________________________________________________________________________
Questioning During and After an OTR Response:

________________________________________________________________________

________________________________________________________________________

________________________________________________________________________

________________________________________________________________________

________________________________________________________________________

________________________________________________________________________

________________________________________________________________________

_______________________________________________________________________

Additional Notes or Related Goals:

________________________________________________________________________

________________________________________________________________________

________________________________________________________________________

________________________________________________________________________

________________________________________________________________________

________________________________________________________________________

Please return your commitment sheet to the presenter.


You will receive a copy in your first PLC meeting.

Thank you for all you do!


Professional Development Evaluation: Online Google Form

https://goo.gl/forms/lix6nIFvesuNeDyJ2
Professional Development Evaluation Handout

Think about the professional development sessions and activities that we have
experienced together during our work on formative assessment OTR implementation.

Rate each of the following on a scale of 1 to 5, with 5 being the highest.

The goals of the professional development sessions were clear.

The presenter was well-organized and knowledgeable.

The amount of work time for group activities was appropriate.

The sessions were engaging.

Activities used to facilitate the professional development experience


were helpful.

Materials and handouts supported the professional development


experience.

The instructional OTR strategies I learned were clearly described and


modeled.

The information I learned in the sessions was relevant and valuable.

This professional development experience will have a positive effect on


my practice.

I left with instructional strategies and ideas that I will be able to


immediately implement in my classroom.

Feel free to add any comments below:



PLC Agenda and Resources


Suggested Agenda for PLCs
(each meeting is 30 minutes)

September, Meeting 1
• Discuss department goals for FA and OTR use.
• Give time to create response cards for class.

September, Meeting 2
• Fill out PLC Formative Assessment Reflection.
• Discuss as a group and give feedback.
• Assignment—fill out the “Current level of performance” section of the PLC Action
Plan to Increase OTRs During FA before the next meeting.

October, Meeting 1
• Discuss current level of performance with group.
• Create and fill out “Plan to increase OTRs” on the PLC Action Plan to Increase
OTRs During FA.
• Assignment—work to integrate OTRs during FA by following action plans.

October, Meeting 2
• Read: Stefl-Mabry, J. (2018). Documenting evidence of practice: The power of
formative assessment. Knowledge Quest, 46(3), 50-57.
• Use the Save the Last Word protocol to discuss the article.
• Discuss how the action plan is progressing.

November, Meeting 1
• Fill out PLC Formative Assessment Reflection.
• Discuss as a group and give feedback.
• Assignment—each teacher asks someone to observe his class and fills out the
“Implement plan and monitor progress” section; should be observed for at least 2
classes.

November, Meeting 2
• Discuss results of action plan with group.
• Give constructive feedback.

December/January, Meeting 1
• Revisit Teacher FA-OTR Commitment Sheet and discuss individual and department
goals.
• Take Teacher Formative Assessment Practices Survey—mid-term assessment and
turn in to administration.
• Turn in all 1st semester PLC reflection sheets and action plans to administration.

December/January, Meeting 2
• Fill out PLC Formative Assessment Reflection.
• Discuss as a group and give feedback.

February, Meeting 1
• Fill out PLC Formative Assessment Reflection.
• Discuss as a group and give feedback.
• Assignment—fill out the “Current level of performance” section of the PLC Action
Plan to Increase OTRs During FA before the next meeting.

February, Meeting 2
• Discuss current level of performance with group.
• Create and fill out “Plan to increase OTRs” on the PLC Action Plan to Increase
OTRs During FA.
• Assignment—integrate OTRs during FA by following action plans.

March, Meeting 1
• Read: Duckor, B. (2014). Formative assessment in seven good moves. Educational
Leadership, 71(6), 28-32.
• Use the Save the Last Word protocol to discuss.
• Discuss how the action plan is progressing.

March, Meeting 2
• Fill out PLC Formative Assessment Reflection.
• Discuss as a group and give feedback.
• Assignment—each teacher asks someone to observe his class and fills out the
“Implement plan and monitor progress” section; should be observed for at least 2
classes.

April, Meeting 1
• Discuss results of action plan with group.
• Give constructive feedback.

May, Meeting 1
• Fill out PLC Formative Assessment Reflection.
• Discuss as a group and give feedback.

May, Meeting 2
• Revisit Teacher FA-OTR Commitment Sheet from PD session and discuss progress
over the year.
• Take Teacher Formative Assessment Practices Survey—post-assessment and turn in
to administration.
• Turn in all 2nd semester PLC reflection sheets and action plans to administration.
Recommended books to read and discuss if more PLC time is available:

Duckor, B., & Holmberg, C. (2017). Mastering formative assessment moves: 7 high
leverage practices to advance student learning. Alexandria, VA: Association for
Supervision and Curriculum Development.
Fisher, D., & Frey, N. (2014a). Checking for understanding: Formative assessment
techniques for your classroom (2nd ed.). Alexandria, VA: Association for
Supervision and Curriculum Development.
Himmele, P., & Himmele, W. (2017). Total participation techniques: Making every
student an active learner. Alexandria, VA: Association for Supervision and
Curriculum Development.
Johnson, J., Uline, C., & Perez, L. (2013). Teaching practices from America's best urban
schools: A guide for school and classroom leaders. New York, NY: Routledge.
*Read Chapter 4
Pearsall, G. (2018). Fast and effective assessment: How to reduce your workload and
improve student learning. Alexandria, VA: Association for Supervision and
Curriculum Development.
PLC Formative Assessment OTR Reflection

Name: _______________________________________________ Date: _____________

1. List a couple of your student learning targets from the past week.

2. What formative assessment strategies did you use in the past week to check for
student understanding of the learning targets (ex: warm-up, exit slip, formative
questioning, etc.)?

3. What OTR technique(s) did you use to elicit feedback about student
understanding during a formative assessment strategy?

4. a. What worked well?

b. Any problems or concerns?

5. What did the feedback you collected reveal about student understanding?

6. What instructional adjustments were/will be made as a result of the student


feedback?

7. What were the results of the instructional adjustments (if you made any)? How
do you/will you know if student understanding improved?

Feedback/ideas from colleagues:

Next steps:
PLC Action Plan to Increase OTRs During Formative Assessment

Current level of performance

Who will collect the data?
___ I will collect my own data.
___ I will ask __________________ to collect data.

How will the data be collected?
___ Hand tally counter   ___ Other

How many total whole group OTRs were given during the class period? _______
Number of OTR sessions during class? _______

During an OTR session, what is your current rate of OTRs (formative questioning)?
Record each session as the number of OTRs over the session’s minutes (for example,
12 OTRs in a 4-minute session is a rate of 3 OTRs per minute):
Session 1: ____ OTRs / ____ min = ____ per minute
Session 2: ____ OTRs / ____ min = ____ per minute
Session 3: ____ OTRs / ____ min = ____ per minute
Session 4: ____ OTRs / ____ min = ____ per minute
Average rate per minute = _______

Plan to increase OTRs

What is your goal number of total whole group OTRs in a class period? _______
What is your goal number of OTR sessions per class? _______
What is your goal rate of OTRs? _______ per minute per session

What types of OTRs will you increase?
___ Verbal (choral response)
Gestural: ___ thumbs   ___ fingers   ___ other
Written: ___ response cards (RCs)   ___ whiteboard   ___ extended response
Technology: ___ clickers   ___ other: _____________

How will you determine progress?

When are you going to implement the plan?

Implement plan and monitor progress
(each line = 1 OTR session)

Observation 1
How many total whole group OTRs were given during class? _______
Session 1: ____ OTRs / ____ min = ____ per minute
Session 2: ____ OTRs / ____ min = ____ per minute
Session 3: ____ OTRs / ____ min = ____ per minute
Session 4: ____ OTRs / ____ min = ____ per minute
Average rate = _______

Observation 2
How many total whole group OTRs were given during class? _______
Session 1: ____ OTRs / ____ min = ____ per minute
Session 2: ____ OTRs / ____ min = ____ per minute
Session 3: ____ OTRs / ____ min = ____ per minute
Session 4: ____ OTRs / ____ min = ____ per minute
Average rate = _______

Observation 3
How many total whole group OTRs were given during class? _______
Session 1: ____ OTRs / ____ min = ____ per minute
Session 2: ____ OTRs / ____ min = ____ per minute
Session 3: ____ OTRs / ____ min = ____ per minute
Session 4: ____ OTRs / ____ min = ____ per minute
Average rate = _______

Observation 4
How many total whole group OTRs were given during class? _______
Session 1: ____ OTRs / ____ min = ____ per minute
Session 2: ____ OTRs / ____ min = ____ per minute
Session 3: ____ OTRs / ____ min = ____ per minute
Session 4: ____ OTRs / ____ min = ____ per minute
Average rate = _______

Average number of OTR sessions per class = ___________

Average rate of OTRs per minute = ___________


Save the Last Word Protocol

Purpose: This during- and after-reading strategy helps participants dig deeply into a
text, furthering reading comprehension and interaction with the text.

Procedure:
1. Make groups of 3 - 4 participants.
2. Assign the text to read.
3. Each participant should list several quotes he finds interesting as well as why
he selected that quote.
4. Once finished reading, one person begins by sharing his quote. Share the page
so participants can look on. Only read the quote, NOT why it was selected.
5. Each person in the group has one minute to respond/react to the quote that was
shared.
6. When each person has responded, the original participant shares why he
selected that quote.
7. It is important that participants remain vigilant about the protocol. The person
reading the quote can’t agree or disagree with others that are commenting on
his quote. He must wait until the end.
8. This process rotates to the next group member and another person shares his
quote, following the same protocol outlined above.

Variation:
1. Each group writes a summary about the reading to share with the class.
2. The group selects what they feel is the most important quote in the reading
and shares with the class why they selected that quote.
3. Participants write a quote on one side of an index card with the page number
and their name. They pass the card to people in their group and each
person writes a response on the back of the index cards.

Source: McCollum, K., & Boles, A. (2013). Protocols and Templates for Literacy
Strategies. Retrieved from https://www.maine.gov/doe/cte/professional/templates-
protocols.pdf
Appendix A References

Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and
assessing: A revision of Bloom's Taxonomy of educational objectives (Complete ed.).
New York, NY: Longman.

Brookhart, S. M., & Moss, C. M. (2012). Learning targets: Helping students aim for
understanding in today’s lesson. Alexandria, VA: Association for Supervision and
Curriculum Development.

Brookhart, S. M., & Moss, C. M. (2014, October). Learning targets on parade.


Educational Leadership, 72(2), 28-33.

Center for Instructional Innovation and Assessment (Producer). (2015, February 9).
Using ABCD cards with Georgianne Connell @ WWU [Video file]. Retrieved from
https://www.youtube.com/watch?v=8yFaZxprJEU (creative commons license)

ESC Region 13. (2017, December 11). Why formative assessment? [Video file].
Retrieved from https://www.youtube.com/watch?time_continue=68&v=xQErAKiSc68

EL Education. (n.d.). Students unpack learning target and discuss academic vocabulary
[Video file]. Retrieved from https://eleducation.org/resources/students-unpack-a-
learning-target-and-discuss-academic-vocabulary

Expeditionary Learning. (2013). Appendix: Protocols and resources. Retrieved from


https://www.engageny.org/sites/default/files/resource/attachments/appendix_
protocols_and_resources.pdf

Fisher, D., & Frey, N. (2015). Checking for understanding digitally during content area
learning. Reading Teacher, 69(3), 281-286.

Haydon, T., MacSuga-Gage, A. S., Simonsen, B., & Hawkins, R. (2012). Opportunities
to respond: A key component of effective instruction. Beyond Behavior, 22(1), 23-31.

McCollum, K., & Boles, A. (2013). Protocols and templates for literacy strategies.
Retrieved from https://www.maine.gov/doe/cte/professional/templates-protocols.pdf

Regier, N. (2012). Book Two: 60 Formative Assessment Strategies. Regier Educational


Resources.

SRI School Reform Initiative. (2017). Protocols and resources. Retrieved from
http://www.schoolreforminitiative.org/download/four-as-text-protocol/
Teaching Channel. (Producer). (n.d.). Daily assessment with tiered exit cards [Video
file]. Retrieved from: https://www.teachingchannel.org/videos/student-daily-assessment

Teaching Channel. (Producer). (n.d.). Formative assessment using the U.P.S. strategy
[Video file]. Retrieved from https://www.teachingchannel.org/videos/ups-strategy-as-
assessment-tool

Teaching Channel. (Producer). (n.d.). Show your cards! [Video file]. Retrieved from
https://www.teachingchannel.org/videos/show-your-cards-student-assessment

Teaching Channel. (Producer). (n.d.). Texting to assess learning [Video file].
Retrieved from https://www.teachingchannel.org/videos/texting-to-assess-learning

Teaching Channel. (Producer). (n.d.). Using clickers in classrooms [Video file].
Retrieved from https://www.teachingchannel.org/videos/using-clickers-in-classroom

The Teacher Toolkit (Producer). (n.d.). Student response cards [Video file]. Retrieved
from http://www.theteachertoolkit.com/index.php/tool/student-response-cards

References cited in the PowerPoints are found on the last slides of each day’s session.

All photos used in the PowerPoints are from:

Pixabay.com
All images and videos on Pixabay are released under the Creative Commons CC0. They
may be used freely for almost any purpose—even commercially and in printed format.
Attribution is appreciated, but not required.

Pexels.com
All photos on Pexels can be used for free for commercial and noncommercial use.
Attribution is appreciated, but not required.
Permissions to Use Educational Videos

License Grant. Edutopia strives to make Edutopia Resources widely available to improve the K-
12 learning process. With that goal in mind, during and subject to the terms and conditions of
these Terms, Edutopia hereby grants you a limited, nonexclusive, non-sublicenseable,
nontransferable, freely revocable license to access and use Edutopia Resources, for personal or
educational purposes, in order to

• download Edutopia Resources on your personal device;


• include Edutopia Resources in a presentation for use at a conference or workshop;
• print pages from Edutopia Resources for nonprofit and educational uses so long as you
include Edutopia's copyright notice and any other credit, byline, and copyright notice
attributable to specific content on each one of the Edutopia pages you print and distribute
(please list the source as follows: "Originally published (insert publication date) ©
Edutopia.org; George Lucas Educational Foundation"); or
• link to Edutopia.org, use our RSS feeds, or place our embeddable video player of
Edutopia Resources on any kind of Web-based content (including whether such Web-
based content is offered for free or a fee), including sites, blogs, e-textbooks, and online
courses, so long as all forms of display, including links and embeds, are accompanied by
a prominent source link back to the Edutopia Technologies (collectively, the "Automatic
Licensed Uses").
Appendix B: Participant Demographics

Columns: Participant, Gender, Age, Ethnicity, Years of Teaching Experience, Level of
Education, Grade(s) Taught, Subject(s) Taught
P1 F 40 Caucasian 19 Master 10th/11th Spanish I, II/English
P2 M 27 Caucasian 4 Master 12th Statistics/Chemistry/Financial
Literacy
P3 F 52 Caucasian 6 Master 9th Academic Intervention
P4 M 33 Caucasian 2 Master 10th U.S.
History/Government/Economics
P5 F 53 Caucasian 17 Master 10th Chemistry/Forensics/Meteorology
P6 F 24 Caucasian 1 Bachelor + 11th English
P7 M 52 Caucasian 24 Master 12th Pre-Calculus/ Algebra I
P8 F 54 Caucasian 6.5 Master 10th/11th Geometry/ Financial Literature
P9 M 24 Caucasian .5 Bachelor 9th World History/ Latin American
History
P10 M 37 Caucasian 13 Master 11th Economics/Government
Appendix C: Observation Protocol

Participant #: ______ Date: ____/____/____ Period: _____ # of Students _____

Observations | Reflections

Setting description:

Evidence of teacher response to students showing misconception or misunderstanding of the content being taught:

What formative assessment strategies were implemented to check for student understanding? (RQ1)
(Describe each assessment.)
(If the teacher is using questioning, give some direct quotes.)

When were the formative assessment strategies implemented during the instructional period? (RQ1)

Evidence of the breadth of feedback the teacher elicited about students’ current understanding during the FA task:

Did the teacher check for understanding with all learners? Many learners? A few learners? One learner? (RQ1)

Evidence of adjusting instruction/change in instructional plan after the FA was implemented:

What instructional adjustment(s), if any, were observed as a result of the information collected from the formative assessment? (RQ2)
Appendix D: Interview Protocol

1. a. In your own words, how would you define formative assessment? (RQ3)
b. If you can, please provide a couple of examples of formative assessment.
c. What is the difference between formative assessment and summative assessment?

2. a. Do you believe there are benefits to regularly using formative assessment to
check for student understanding?
b. If so, what are they?
c. In your opinion, what is the purpose of formative assessment? (RQ3)

3. In theory,
a. Who should be checked in class for their understanding of a lesson’s learning
goals/targets?
b. When should they be checked?
c. How often should they be checked? (RQ3)

4. a. Do you ever use formative assessment to check for student understanding in
class? (If no, skip to #9.)
b. If so, please give examples and explain. (RQ1)

5. How often do you typically check for student understanding, and why? (RQ1)

6. a. At what point(s) during a lesson/class period do you typically use formative
assessment to check for student understanding?
b. What reason(s) do you have for using formative assessment at this time? (RQ1)

7. a. When you want to check for student understanding, how do you decide what
strategy to use? (e.g., what things do you consider?)
b. Are your FAs usually planned or unplanned? Explain. (RQ1)

8. a. Do you ever adjust your instruction as a result of student feedback from
formative assessment?
b. If yes, how so? (RQ2)

9. a. Are there challenges that keep you from using formative assessment to check
for student understanding with more fidelity? If so, what are they? (RQ3)
b. Similarly, are there challenges that keep you from using formative assessment
feedback to adjust your instruction with more fidelity? If so, what are they? (RQ3)

10. What instances or circumstances might cause you to use formative assessment (to
check for student understanding and to adjust instruction) with more fidelity in
your classroom? (RQ3)

11. How satisfied are you with
a) your knowledge of FA strategies to check for student understanding?
b) your actual use of formative assessment to check for student understanding?
c) your knowledge of using FA feedback to adjust instruction?
d) your actual use of FA feedback to adjust your instruction? (RQ3)

12. Is there anything else you would like to add that would help me understand your
use of or thoughts about FA to check for student understanding and to adjust
instruction?
Appendix E: Teacher Classroom Formative Assessment Log

INSTRUCTIONS: Please fill in this log for a total of three consecutive school days for one class hour of your choosing
within three weeks of receiving this form. If more than one FA strategy was used during the class hour, please document each
and draw a line between them. When you have completed your log, please contact me to pick up your sheets. Thank you!
FA = Formative Assessment

Participant # _______ Class Hour: _______ Subject: __________________________________

Log columns (one row per date; document each FA strategy separately):

Date

1. FA strategy used to check for student understanding, if any. Name or description of the strategy.

2. Was the FA planned prior to the lesson or unplanned? P or U?

3. Was the FA given before, after, or during learning? B, A, or D? (can be multiple)

4. With this FA, I checked the understanding of: A = all students, M = most students, F = few students, O = one student (or none).

5. What did you learn (if anything) as a result of giving the FA?

6. How, if at all, did/will you use the feedback from the FA to adjust your instruction?
Appendix F: List of Possible Formative Assessment Strategies

One-Minute Essay                     Learning Logs
Concept Maps                         Cubing
Index Card Summaries                 Whip Around
Analogy Prompt                       K-W-L
Warm-ups                             Paper Pass
Exit Slips                           Reflection Journal
A-B-C Summaries                      Questionnaires
Cloze Procedure                      Inside-Outside Circle
Think-Write-Pair-Share               Summary Writing
Formative Questioning                Surveys
Muddiest Point                       Quiz
Four Corners                         Turn and Talk
3-2-1                                Show of Hands
Quick Write                          Likert Scale
Polls                                Anticipation Guides
Three Facts and a Fib                Matching Cards
Frayer Model                         Last Word
Writing Frames                       Odd One Out
RSQC2                                Annotated Student Drawings
I Used to Think . . . But Now I Know
Appendix G: Whole Group OTR Strategies by Category

Any written OTR (opportunity to respond) formative assessment strategy below can be used
to collect an extended written response from all students to check for understanding.
Many of these strategies can be used as exit slips. Responses should be collected or
reviewed, and the student feedback should be used to determine next steps for instruction.

Exit Slips: Before students leave at the end of class, ask them a question or pose a
problem for them to solve. Students record their responses on a half sheet of paper, an
index card, or a sticky note. Collect the exit slips as the students leave the classroom.
Glance through the exit slips to determine whether students generally understand the
topic or whether you need to provide further whole-class or small-group instruction in a
particular area. Separate the exit slips into piles indicating students who have mastered
the learning targets or are well on their way to doing so, students who are making steady
progress, and students who need additional one-on-one or small-group instruction. Exit
slips can be used to create groupings for the next day’s lesson, and activities can be
planned based on the students’ responses.

One-Sentence Summaries: Asking students to give you a one-sentence summary of what they
learned provides you with information about what your students know about a topic. Give
students time to reflect on their learning and encourage them to think about their
response. The depth of the student summaries will indicate their understanding of the
topic or unit to date and provide you with direction for future planning of lessons.
Alternative: have students write three summaries (one 10-15 words long, one 30-50 words
long, and one 75-100 words long) to show their understanding.

Sentence Stems: Sentence prompts can be used to assess students and gather information
about what they understand. Create a sentence starter and let students respond. For
example, they can choose one of the following:
The most difficult part of the lesson today was ...
I understand ... OR I don’t understand ...
The main thing I learned about today’s topic is ...
Two questions I have about what I learned are ...
I could use some help with ...
I predict that ... because ...
I would like to get better at ...

Quick Writes: Quick writes give teachers a visual of student learning. Provide students
with an open-ended question and set an amount of time for them to write, from 2 to 5
minutes. Tell students not to worry about the conventions of writing but rather to focus
on getting their ideas down on paper. When the time is up, ask students to put their
pencils down. Look through the quick writes for valuable information regarding the
knowledge and understanding your students have about a given topic.

One-Minute Essay: The one-minute essay is a quick formative assessment strategy that
allows you to gauge student understanding of a particular topic. Pose a question to the
students and have them respond. Tell the students they have one minute to write down
their response. Ensure the question you ask can be answered in one minute. Use questions
that cause students to reflect on their learning. Use Bloom’s Taxonomy question starters
to help create high-level questions.

Learning Logs: Learning logs are notes students make during a unit of study. Time is set
aside at the beginning or end of class for students to write about what they have
learned, list any questions they may have about the topic, or make connections between
the topic and their own lives. Learning logs provide you with valuable information about
what students understand and possible directions for future instruction.

3-2-1: The 3-2-1 strategy is a quick way to gain information about the level of
understanding students have about a current unit of study. Ask students to jot down 3
things they have learned about a topic, make 2 personal connections, and state 1 thing
that is unclear; or ask for 3 differences between ___ and ___, 2 similarities between
___ and ___, and 1 question they have on the topic (many variations are possible).

The above strategies and descriptions were selected from Regier, N. (2012). Book two: 60
formative assessment strategies. Regier Educational Resources.

Graphic Organizer: Graphic organizers give students a visual template to write down what
they know in an organized way. Good for checking student understanding after a lesson;
students can be directed not to use their notes in order to show a deeper understanding.
Links for templates:
http://www.teach-nology.com/worksheets/graphic/
https://www.eduplace.com/graphicorganizer/
http://freeology.com/graphicorgs/
Examples: Venn diagram, tree chart, concept web, cause-effect, T-chart, flow chart,
compare/contrast, mind map.

I Used to Think . . . But Now I Know: Ask students to compare their ideas from the start
of the lesson to their ideas at the end of the lesson. This is a good way to see whether
misconceptions were cleared up or if there are gaps in their learning.
Triangle, Square, Circle: Students write down a Triangle idea (three main points that
they learned in the lesson), a Square idea (something that “squared,” or agreed, with
what they previously knew or thought), and a Circle idea (something going around in
their head that they don’t quite understand or that they wonder about).

Misconception Check: Present students with a common misconception about the current
concept, principle, or process they are learning. Ask them whether they agree or
disagree with the statement and to explain why. They should show specific examples or
state pieces of evidence in their defense.

UPS Check: This strategy works well with any problems that you want students to do in
order to show their understanding. The “U” stands for understanding the problem first:
the student writes what the problem is asking for in his or her own words. The “P”
represents planning out the steps to use: the student writes or depicts the steps used
to solve the problem. The “S” stands for solving the problem: at this point, the student
solves the problem. Finally, the “Check” asks students to make sure that the answer
makes sense and to give their reasoning.

Quick Draws with Explanation: Give students the task of drawing out a concept or idea
that they learned in the lesson. They should label or explain what each part of their
drawing means as a way of explaining their thinking. An alternative is to have students
answer the question: My picture represents ________ because ________.

Spot the Mistake: Similar to the Misconception Check, students are presented with a
problem, statement, visual, equation, or even a paragraph with a deliberate mistake(s)
in it. They are to explain what the mistake(s) is, why the information is incorrect,
and state what the correct answer should have been. This lets the teacher know whether
students understand at a deeper level.

ABC Brainstorm: Use the ABC strategy as a pre-assessment before writing or as a way to
assess what students have learned about a learning target or topic. Have students write
the alphabet on paper or have a pre-printed sheet available. They associate a letter of
the alphabet with a vocabulary term or key idea that indicates their understanding.
Students try to fill as many letter spaces as possible.

Word Sort: Students are given a set of vocabulary terms, and they must place them in
logical categories (graphic organizer) and write their justification for the categories.
Students could also be given the categories and justify in which category they would
place each word.
Quick Student Self-Assessment OTRs

My Windshield: A quick strategy that can easily be written on an assignment that
students turn in. Students write “muddy,” “buggy,” or “clear” to describe their level of
understanding. A “clear” windshield is a high level of understanding; a “buggy”
windshield means things are not totally clear, so they mostly understand but could use
more practice or help; and a “muddy” windshield is so dirty the driver cannot see where
he is going, meaning that the student does not understand yet.

Make Faces: A strategy that can be written on an assignment. The student draws a smiley
face for “I understand!”, a straight face for “I somewhat understand but am not there
yet,” or a frown face meaning “I do not understand yet.”

Proficiency Trays: Have three trays available by the door. As students exit, they place
their exit slips or assignment in the tray they feel best represents their level of
learning, such as “I understand well,” “I somewhat understand,” or “I do not understand.”

Self-Ratings: A strategy that can be written anywhere on an assignment. The student
writes a number from 1 through 5 to represent their level of understanding. This
strategy can correspond to gestural “Fist-of-Five” levels already established in class.

Traffic Light: Give students a red circle, a yellow circle, and a green circle (or
squares or colored cups). To check for student understanding during a lesson or unit,
ask students questions about their learning. If students are comfortable with the topic
and ready to move on, they hold up their green circle. If they are fairly comfortable
with the topic, they hold up their yellow circle. Students who are confused or require
further instruction hold up the red circle. This quick strategy provides you with
immediate feedback and direction for your instruction. You can also use this activity
during independent learning and to group students for help.

*With any of the above formative assessments, the teacher can use student feedback to
determine next steps for instruction. Next steps can include stopping the lesson to
address students who are not understanding, placing students into groups based on their
understanding during the current or next class, adding confusing concepts to the next
day’s warm-up for review, or differentiating future lessons so student learning can be
addressed or extended.
Technological Opportunities to Respond

AnswerGarden (https://answergarden.ch/): Online polling. This real-time tool allows
teachers to see student feedback when asking questions to check for student
understanding.

AnswerPad (http://www.theanswerpad.com/): A blank page that functions like an individual
whiteboard for each student.

Clickers (personal response devices): Teachers use a software program where they can ask
questions to check for student understanding. Students use a device to input their
answers anonymously. The teacher can see (or post for students to see as well) real-time
feedback to immediately address misunderstandings.

Formative (https://goformative.com/): This site provides teachers with the opportunity
to check for student understanding by asking questions, receiving the results in real
time, and then providing immediate feedback to students.

Google Forms (https://www.google.com/forms/about/): A Google Drive application that
allows teachers to create documents that students use to take formative assessment
quizzes. Real-time data response software allows teachers to quickly analyze data by
question.

Kahoot (https://kahoot.com/): A game-based classroom response system. This fast-paced,
fun quizzing game can be used during formative assessment to see student answers in
real time, to give immediate feedback to students, and to reteach content.

Mentimeter (https://www.mentimeter.com/): Fill presentations with questions to ask
students to check for understanding. Real-time results help teachers adjust instruction.

Padlet (https://padlet.com/): Students can share responses by posting onto an online
“board.” Great for exit tickets. Quick to make and share.

Pear Deck (https://www.peardeck.com/): An add-on to Google Slides. Allows you to make
interactive presentations where students can follow along on their own device and
participate in formative assessment activities.
Plickers (https://www.plickers.com/): Check for understanding in classrooms with limited
technology; only one teacher device is needed. Print answer cards from the website.
Students are given a card. Each code card can be turned in four orientations for the
answers A, B, C, and D. Use the Plickers mobile app to scan the answers students hold up
on their cards and see a bar graph of responses.

Poll Everywhere (https://www.polleverywhere.com/): Once students record their response
on a device, the results can be displayed on a screen in real time. Use this tool as a
way to collect immediate formative data in any content area.

Quia (https://www.quia.com/web): Create games, quizzes, surveys, and more to check for
student understanding. You can access a database of existing quizzes from other
educators to save time.

Quizalize (https://www.quizalize.com/): Fun classroom team games. Instantly know who
needs help and what they need help with. Effortlessly assign follow-up activities that
boost student results.

Quizizz (https://quizizz.com/): Use in class as teams or as self-paced quizzes to assess
and engage students. You can also assign a quiz to be completed as homework. Use reports
by class and student to help reflect on teaching and to gauge what students have learned.

Quizlet Live (https://quizlet.com/features/live): Create flashcards, tests, quizzes, and
study games that are engaging and accessible online and via a mobile device. Students
work in teams, log on with a code to begin playing, and compete to show their
understanding of vocabulary.

Socrative (https://www.socrative.com/): Educational exercises and games with real-time
results that help determine whether reteaching is needed or whether students are ready
to move on to new concepts. Review reports to prepare for future classes.

Triventy (http://www.triventy.com/): A free game platform for students to take quizzes.
These live quizzes provide real-time data on student understanding of classroom concepts.

ZipGrade (https://www.zipgrade.com/): Turn your phone or tablet into a grading machine
similar to a Scantron. Download answer sheets for students to fill out during their
formative assessment. Get instant feedback by grading exit tickets and quizzes as soon
as students finish. Similar to https://get.quickkeyapp.com.
