
Table of Contents

A Unit 1

1. Handout/PowerPoint Presentation
2. Abstracts of 2 Research Papers on the Topics
3. Analysis & Insights on the Research Paper Problem and Findings
4. Summative Test Score and Analysis: Strengths, Weaknesses, Improvement
Plan of Action
5. 10 Multiple-Choice Test Items with TOS
6. Activity Output

B Unit 2

1. Handout/PowerPoint Presentation
2. Abstracts of 2 Research Papers on the Topics
3. Analysis & Insights on the Research Paper Problem and Findings
4. Summative Test Score and Analysis: Strengths, Weaknesses, Improvement
Plan of Action
5. 10 Multiple-Choice Test Items with TOS
6. Activity Output

C Unit 3

1. Handout/PowerPoint Presentation
2. Abstracts of 2 Research Papers on the Topics
3. Analysis & Insights on the Research Paper Problem and Findings
4. Summative Test Score and Analysis: Strengths, Weaknesses, Improvement
Plan of Action
5. 10 Multiple-Choice Test Items with TOS
6. Activity Output
UNIT 1: PRELIMINARY CONCEPTS AND RECENT TRENDS

What is Educational Assessment?

The word ‘assessment’ is rooted in the Latin word ‘assidere’, which means ‘to sit
beside another’. Assessment is generally defined as the process of gathering
quantitative and/or qualitative data for the purpose of making decisions.

Assessment in learning is vital to the educational process, just like curriculum
and instruction. Schools and teachers will not be able to determine the impact of
curriculum and instruction on students without assessing learning. Therefore, it is
important that educators have knowledge of and competence in assessing learners.

Assessment in Learning

Refers to the process of gathering information about a student’s knowledge, skills,
and abilities during the learning process. Its primary purpose is to provide feedback to both
students and educators to inform instruction and improve learning outcomes. This type of
assessment is typically formative in nature, focusing on helping students understand their
progress and what they need to do to improve.

Assessment and Testing

Assessment and testing are often used interchangeably, but there is a subtle
difference. Assessment is a broader term that encompasses various methods of evaluating
student learning, including tests. Testing specifically refers to the use of standardized
instruments or exams to measure knowledge and skills. Testing is just one method of
assessment.

Assessment and Grading

Assessment and grading are related but distinct processes. Assessment involves
gathering data about a student’s performance, which can be formative (informative, with
feedback to support learning) or summative (evaluative, to assign grades). Grading is the
process of assigning a value (usually in the form of a letter grade or percentage) to a
student’s overall performance in a course or on an assessment. Assessment informs grading,
but grading often simplifies complex learning into a single score or grade.

Different Measurement Frameworks Used in Assessment:

Assessment can be guided by various measurement frameworks, including:

Norm-Referenced Assessment: This framework compares a student’s performance to that
of a larger group (a norming group). It is commonly used in standardized testing, where
students’ scores are ranked relative to their peers.

Criterion-Referenced Assessment: This framework measures a student’s
performance against specific criteria or learning standards. The goal is to determine
whether students have met predetermined learning objectives.

Assessment and Testing

The most common form of assessment is testing. In the educational context, testing
refers to the use of a battery of tests to collect information on student learning over a specific
period of time. It can be categorized as either of the following:

 Selected response (e.g., matching-type test)
 Constructed response (e.g., essay test, short answer test)

Objective Format
o Multiple choice
o Enumeration
Subjective Format
o Essays

Assessment for Learning (Formative Assessment)

This type of assessment occurs during the learning process. It is designed to inform
both teachers and students about the progress being made toward learning objectives.
Formative assessments include quizzes, discussions, peer assessments, and observations.
Their primary purpose is to provide feedback for instructional improvement.
Assessment as Learning (Self-Assessment)

Assessment as learning emphasizes the role of students in assessing their own
learning. It encourages students to reflect on their strengths, weaknesses, and learning
strategies. This type of assessment fosters metacognition and helps students take
ownership of their learning.

Assessment of Learning (Summative Assessment)

Summative assessments occur at the end of a learning period to evaluate students’
overall achievement. Common examples include final exams, standardized tests, and end-
of-term projects. The results are typically used for grading and reporting purposes.

Principles in Assessing Learning

Assessment should have a clear purpose: Assessments should have well-defined goals,
whether they are for feedback, grading, or diagnosis.

Assessment is not an end in itself: Assessment should be a means to an end, with the ultimate
goal of improving learning, not just assigning grades.

Assessment is Learner-Centered: It should focus on individual student progress, recognizing
that each student has unique needs and learning trajectories.

Assessment is both Process- and Product-Oriented: It should evaluate not only the end
product but also the processes students use to reach that product, such as problem-solving
or critical thinking.

Assessment must be comprehensive and holistic: It should consider various aspects of
learning, including knowledge, skills, attitudes, and behaviors.

Assessment requires the use of appropriate measures: The methods and tools used for
assessment should align with the learning goals and objectives being assessed.

Assessment should be as authentic as possible: Whenever feasible, assessments should
mirror real-world situations to assess how well students can apply their learning in practical
contexts.

Users of Educational Assessment


Educational assessments serve various stakeholders, each with different purposes:

Teachers: Teachers use assessment for learning to understand their students’
progress and tailor their teaching methods accordingly. Summative assessments help
them assign grades and evaluate the effectiveness of their instruction.

Students: Students engage in assessment as learning to self-evaluate their progress
and make adjustments to their learning strategies. They also receive feedback from
formative assessment that guides their efforts.

Parents: Parents rely on assessments, especially summative ones, to gauge their
child’s performance and educational progress. This information informs discussions
with teachers and helps parents support their child’s learning journey.

Common Terminologies

Test/Testing

 It is an instrument designed to measure any characteristic, quality, ability,
knowledge, or skill.
 Refers to the use of a test or battery of tests to collect information on
student learning over a specific period of time.

Measurement

 An instrument or device used in the process of measuring individual intelligence,
personality, attitude, or anything that can be expressed quantitatively.
 The process of obtaining a numerical description of the degree to which an
individual possesses a particular characteristic.
 It answers the question “How much?”

Evaluation

 Refers to the actual process of making decisions or judgments on student
learning based on the information collected from measurement.
 A process of making judgments, assigning value, or deciding on the worth of a
student’s performance.
Recent Trends and Focus in Educational Assessment

Accountability and Fairness

Accountability is the process by which students, teachers, and administrators give an
account of their progress. It is also a means by which policymakers at the state and
district levels, as well as parents and taxpayers, monitor the performance of students and schools.

Outcome-Based Education

Is a system of teaching and learning that focuses all elements of the educational
experience, including teaching, assessment, grading, and reporting, on the intended learning
outcomes that students are expected to demonstrate by the end of their schooling.

Item Response Theory

Item response theory provides a useful and theoretically well-founded framework
for educational measurement. It supports such activities as the construction of
measurement instruments and the evaluation of test bias and differential item functioning.

Standards-Based Education

Is a system of teaching and learning that focuses all elements of the educational
experience, including teaching, assessment, grading, and reporting, on standards that span
a student’s entire schooling.
II. ABSTRACTS OF 2 RESEARCH PAPERS
Students’ and Teacher’s Experiences of the Validity and Reliability of Assessment in a
Bioscience Course
ABSTRACT

This case study explores the assessment of students’ learning outcomes in a second-year
lecture course in biosciences. The aim is to deeply explore the teacher’s and the students’
experiences of the validity and reliability of assessment and to compare those perspectives.
The data were collected through stimulated recall interviews. The results showed that
grades did not always reflect the learning outcomes and that the intended level of
understanding was not always measured. In addition, the teacher and the students thought
that the assessment criteria were unclear, which in turn led to the unreliability of the
assessment. These problems with the validity and reliability of assessment led to
perceptions that the assessment was unfair. The results imply that grades should be
critically evaluated as indicators of the quality of learning outcomes. In addition, practical
implications are discussed.

ANALYSIS: The aim of this study was to construct a comprehensive picture of the
assessment in one specific bioscience course. In addition, the aim was to explore the
teacher’s and the students’ experiences of the validity and reliability of assessment and to
compare them. The main result was that both the teacher and the students felt that the
assessment was not always valid and reliable. The grades did not always reflect the
students’ learning outcomes.
INSIGHT: Assessment has an important role in higher education because it affects students’
studying and the quality of learning outcomes. Course grades are used as objective
measurements of student achievement. The grades are trusted and relied on for important
decisions and they play a major role in students’ lives. In addition, assessment might have a
profound impact on students’ sense of their own capacities and achievements.

SUMMATIVE TEST SCORE AND ANALYSIS


Strengths:

They have prior knowledge about our topic since it is very common and familiar.
They know how assessment develops students’ learning.
They have an idea of how recent trends relate to assessment.
Actively participating.
Weaknesses:
They are somewhat confused about the subtopics.
Some terms in the topic are not familiar to them.
There are some terminologies and topics that are new to them.
Incomplete mastery.

Improvement Plan of Action


Make the discussion audible, concise, and informative.
State the meaning of unfamiliar words.
Cite more examples to avoid misunderstanding and confusion.
Monitor progress.
ANALYSIS

Overall, students scoring 28/30 demonstrate effective knowledge of the subject, active
participation, time management, and problem-solving skills. However, some of them may
lack fluency, show limited participation, or experience test anxiety. To improve, we need to
monitor their academic progress and growth, provide enrichment, combat test anxiety,
encourage critical thinking, and maintain regular assessments that encourage them to
participate.

BSED English 3A Quiz Result (10 Items)

Number of Responses: 41

Item No.     No. of Correct Answers     No. of Incorrect Answers
1            30                         11
2            25                         16
3            31                         10
4            23                         18
5            34                         7
6            40                         1
7            38                         3
8            28                         13
9            41                         0
10           38                         3
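As an illustrative reading of these results (not part of the original analysis), each item's difficulty index can be taken as the proportion of correct responses:

\[
p = \frac{\text{correct responses}}{\text{total responses}}, \qquad
p_1 = \frac{30}{41} \approx 0.73, \quad
p_4 = \frac{23}{41} \approx 0.56, \quad
p_9 = \frac{41}{41} = 1.00
\]

On this reading, items 4 and 2 were the most difficult for the class, while item 9 was answered correctly by everyone.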
UNIT II: TARGET SETTING

Performance Standards
- The standards are written to support and promote specific desirable learner behaviors in a
particular task. In teaching and learning, the basis for standards is the attainment of the minimum
competencies required by the course learning outcomes.

Characteristics of Good Performance Standards

According to Hicks (2020), all excellent performance standards have many of the following
10 characteristics:

1. Clear performance standards are easy to understand.
2. Clear performance standards relate directly to the mission.
3. Tools and technology must support clear performance standards.
4. Performance standards are consistent.
5. Clear performance standards cannot be confused with one another.

Assessment Type Selection

★ Principle of Constructive Alignment

★ Varying Assessments

★ Information Collection on Student Performance

Types of Educational Assessment:


Assessment For Learning
This activity entails generating feedback on teaching and learning.

Assessment As Learning
This activity engages learners in self-assessment.

Assessment Of Learning
This will provide information about learning achievement.

Assessment Types
1. Summative Assessment
2. Formative Assessment
3. Evaluative Assessment
4. Diagnostic Assessment
5. Performance-Based Assessment
6. Selective Response Assessment
7. Authentic Assessment
8. Written and Oral Assessment
CONSTRUCTIVE ALIGNMENT

⮚ is a design for teaching in which what students are intended to learn, and how they
should express their learning, is clearly stated before teaching takes place. Teaching is
then designed to engage students in learning activities that optimize their chances of
achieving those outcomes.

Constructive alignment in practice: Below are some steps you might want to consider when
designing assessments for your course.

Step 1: Start with learning outcomes.

Consider the question: What do I want students to know how to do when they leave this
course? Use skills across Bloom's taxonomy to write your aims.

Step 2: Choose assessment methods

Consider the question: What kinds of tasks will reveal whether students have achieved the
learning objectives I have identified? Think of assessment methods that allow students to
demonstrate that the aim(s) of the course have been met. If your course outcome/aim is for
students to apply analytical skills, you have to make sure that your assessment measures
those skills.

Step 3: Decide on teaching and learning activities

Consider the question: What kinds of activities in and out of class will reinforce my learning
objectives and prepare students for assessments?

CONTENT STANDARDS

⮚ A content standard in education is a statement that can be used to judge the quality of
curriculum content or as part of a method of evaluation. K-12 standards should clearly
describe the specific content that should be taught and learned during the K-12 years,
grade by grade. Content standards articulate an essential core of knowledge and skills
that students should master. Standards clarify what students are expected to know and be
able to do at various points in their K-12 academic career.

In the Philippines, the observations mentioned above are quite evident. Our educational
system follows standards in selecting content. As to the criteria for the selection of the
content, Alvior (2015) suggested the following:

a) INTEREST

b) SELF-SUFFICIENCY

c) SIGNIFICANCE

d) VALIDITY

e) UTILITY

f) LEARNABILITY

g) FEASIBILITY

COMPETENCIES, OBJECTIVES, OUTCOMES AND CHARACTERISTICS OF OBJECTIVES (SMARTER)

Competencies, objectives and outcomes

- can be written to describe the learning expected of students in individual courses or for a
program as a whole. However, these terms are often used interchangeably to describe desired
learning and teaching practices, when in reality they mean different things.
Learning Competencies
A general statement that describes the use of desired knowledge, skills, behaviors, and
abilities. Competencies often define specific applied skills and knowledge that enable people
to successfully perform specific functions in a work or educational setting.
Some examples include:
• Functional competencies: Skills that are used on a daily or regular basis, such as
cognitive, methodological, technological, and linguistic abilities
• Interpersonal competencies: Oral, written, and visual communication skills, as well as the
ability to work effectively with diverse teams
• Critical thinking competencies: The ability to reason effectively, use systems thinking,
and make judgments and decisions toward solving complex problems
Learning Objectives
A statement that describes what a faculty member will cover in a course and what a course
will have provided students. They are generally broader than student learning outcomes. For
example, “By the end of the course, students will use change theory to develop family-
centered care within the context of nursing practice.” Statements like this help determine
what the student learned and what the teacher taught.
Learning Outcomes
A specific statement that outlines the overall purpose or goal from participation in an
educational activity. These statements often start by using a stem phrase-a starter statement at
the beginning of each learning outcome-such as “students will be able to.” This is then
followed by an action verb that denotes the level of learning expected, such as understand,
analyze, or evaluate. The final part is the application of that verb in context, describing
the desired performance level, such as “write a report” or “provide three peers with
feedback.” An example of a well-structured outcome statement is: “Students will be able to
locate, apply, and cite effective secondary sources in their essays.”
CHARACTERISTICS OF OBJECTIVES (SMARTER)
An objective is derived from a goal, has the same intention as a goal, but it is more specific,
quantifiable and verifiable than the goal.
SPECIFIC
The goal must be explicitly defined, without any ambiguity. It cannot be subject to individual
interpretation but must explain specifically what has to be achieved and the type of outcome
expected.
MEASURABLE
Objective includes how the action will be measured. Measuring your objectives helps you
determine if you are making progress.
ATTAINABLE
This criterion emphasizes the importance of setting an objective that is challenging for your
team but nonetheless reachable given the existing constraints.
RELEVANT
This means that the goals should be in line with and in harmony with what you actually want
out of life; they should match up with your core values.
TIME-BOUND
This criterion stresses the importance of specifying an appropriate time frame for your goal.
You have to set an exact date by which you plan to achieve these goals.
EVALUATED
As a manager, it is your responsibility to set challenging goals for your team but also to
support your staff in reaching their targets by assessing progress on a regular basis and by
providing recommendations or coaching sessions to overcome eventual obstacles.
RECOGNIZED
When reaching the end of the time frame defined for the goal execution, run a final
evaluation to assess the success or failure in achieving the objectives.
To conclude, S.M.A.R.T.E.R. is about giving criteria to guide the setting of objectives.
S.M.A.R.T.E.R. goal setting takes this two steps further, forcing you to evaluate and
readjust your approach.
DOMAINS OF LEARNING
● are a series of learning objectives created in 1956 by educational psychologist Dr.
Benjamin Bloom.
● it involves 3 categories of education namely: cognitive, affective and psychomotor.
COGNITIVE DOMAIN
● aims to develop the mental skills and acquisition of knowledge of the individual. The
cognitive domain encompasses six categories: knowledge, comprehension, application,
analysis, synthesis, and evaluation.
KNOWLEDGE
● recalling/recognizing information previously learned.
COMPREHENSION
● comprehending/interpreting information based on material previously learned.
APPLICATION
● Selecting and using data to fix a problem independently.
ANALYSIS
● understanding or breaking down assumptions made by a statement /question to make
conclusion.
SYNTHESIS
● Combining ideas to build a new concept or plan.
EVALUATION
● making an assessment based on established criteria.

BLOOM'S TAXONOMY
● a set of three hierarchical models used for the classification of educational learning
objectives into levels of complexity and specificity.

AFFECTIVE DOMAIN
● represents skills that foster appropriate emotional responses.
Five areas of emotional response simple to complex:
Receiving
● involves a passive awareness of emotions and feelings.
Responding
● a student actively engages in the learning process by receiving and reacting.
Valuing
● a student values a concept when they express its worth or what it means to them.
Organizing
● a student develops a value system by arranging their values or beliefs in order of
priority.
Characterizing
● a student acts according to the values they have developed and internalized as
personal philosophy.
Psychomotor Domain
● focuses on physical skills.
Areas of Psychomotor
Perception
● the use of sensory cues to guide their motor activities.
Set
● the feeling of readiness to act upon challenges and resolve them.
Guided Responses
● begins to learn complex skills often through trial and error.
Mechanism
● development of basic proficiency
Complex overt response
● students learn to perform a task with advanced proficiency.
Adaptation
● students have developed their skills and can change them to meet specific
requirements.
Origination
● students learn to develop a new skill using principles learned while gaining the original skills.

Revised Bloom's Taxonomy
To provide learners with clearer instructional goals, a group of researchers led by Bloom's
colleague David Krathwohl and one of Bloom's students, Lorin Anderson, revised the
taxonomy in 2001. In the new variant, nouns were replaced by action verbs.
Solo Taxonomy and Marzano’s Taxonomy are two different frameworks used in education
for classifying and structuring levels of cognitive learning outcomes. They both provide a
structure for understanding and categorizing the complexity of learning objectives and are
often used to guide curriculum development, assessment design, and instructional strategies.
Here’s an overview of each taxonomy:
1. Solo Taxonomy (Structure of Observed Learning Outcomes):
Developed by John B. Biggs and Kevin F. Collis, the Solo Taxonomy is primarily focused on
the levels of understanding and thinking skills that students can attain.

Solo Taxonomy consists of five levels, which progress in complexity:

Prestructural: The learner has no understanding of the topic.

Unistructural: The learner has a single relevant idea or fact about the topic.

Multistructural: The learner has several relevant ideas or facts but lacks integration.

Relational: The learner can integrate ideas and show understanding of the connections
between them.

Extended Abstract: The learner can apply their understanding to new and broader contexts,
showing a high level of abstract thinking.

Solo Taxonomy emphasizes the development of students’ understanding from simple and
disconnected facts to the ability to apply knowledge in complex, real-world situations

2. Marzano’s Taxonomy:

Developed by educational researcher Robert J. Marzano, Marzano’s Taxonomy focuses on


categorizing educational objectives into five domains related to learning processes and
cognitive dimensions.

Marzano’s Taxonomy includes five levels or domains:

Retrieval: This level involves the basic recall of facts, terms, or information.

Comprehension: Learners understand and can explain concepts and ideas.

Analysis: Students can analyze information, make connections, and identify patterns or
relationships.

Knowledge Utilization: This level focuses on the application of knowledge in real-world


situations, including problem-solving and decision-making.

Metacognitive System: At the highest level, learners can self-monitor, reflect, and adapt
their thinking and learning strategies.

Marzano’s Taxonomy is designed to help educators develop more precise learning objectives
and assessments that target specific cognitive skills.

Comparison:

Solo Taxonomy primarily emphasizes the depth of understanding and cognitive complexity,
while Marzano’s Taxonomy includes a broader range of cognitive processes, including
metacognition.

Solo Taxonomy’s focus on levels of understanding aligns closely with Bloom’s Taxonomy
and is often used as a complement to it, whereas Marzano’s Taxonomy offers a unique
perspective on cognitive development.

Educators often choose between these taxonomies based on their specific educational goals,
context, and preferences. Both taxonomies provide valuable frameworks for designing
effective instruction and assessments that promote meaningful learning
UNPACKING COMPETENCIES USING 5P'S

What are Learning Competencies?

– the main ideas or skills you expect students to master (these are also called "goals").
What is unpacking?
–It is the process of deconstructing student learning outcomes into component parts or
competencies to identify key life-long transferable learning skills and the types of learning
experiences, activities, tasks, and assessments that align with those outcomes.
Purpose: If a lesson is to be taught there must be a good reason for teaching it.
• What are you teaching? Why are you teaching this?
• Where does it fit into the curriculum/Schemes of Learning?
• How will it benefit the pupils?
• Will it increase knowledge, understanding or skills?
• How will you prepare the children for learning?
• How will you remind the children of previous learning?
• How will the children know the Success Criteria (give SC)?
Preparation:
• Are you ready to deliver the lesson?
• Do you have the right resources?
• Is the classroom fit for purpose?
• Do you need any special arrangement for the lessons?
• Are you safe (risk assessment)?
• How will you establish the appropriate atmosphere (behavior for learning)?
Pitch: The pitch of the lesson must ensure that all pupils can access the materials.
• Describe the type and range of differentiation required.
• Identify the range of 'Levelness' and what this looks like.
• Identify how extensions might be made if necessary (higher or lower).
Pace: It must maintain interest and ensure learning.
• How frequently will the activities change?
• How will the pupils know the time frame for each activity?
• What ways will be used for recording such that pace is maintained?
Progress: You and children must know that progress has been made.
• How will you know that progress has been made?
• When will pupils refer to and reflect on the Learning Objectives?
• How do you know progress has been made?
• How do the children know they have made progress?
• Will there be opportunity for peer assessment?
ABCD OF THE STATEMENT OF OBJECTIVES

● Learning objectives can be identified as the goals that should be achieved by a student at
the end of a lesson. The objectives of a lesson describe the base knowledge and skills we
want our students to learn from our lesson.
● Having specific goals helps the logical flow of a lesson. It’s vital that a lesson is tailored to
achieve detailed lesson objectives in order for the lesson to have a positive and
constructive outcome; basically, to make sure that students achieve the aim of the lesson.
This process can be simplified by following a basic formula: The ABCD approach. By using
this formula, you will be able to create clear and effective objectives. It consists of four key
elements: (A) Audience, (B) Behavior, (C) Condition, and (D) Degree.
Audience
● First you need to establish what prior knowledge your students have. Assess whether your
students know any of the materials you want to present.
● Prior knowledge can be assessed by giving all students a pre-test or a pre-course quiz. It’s
vital to accurately understand a student’s prior knowledge to avoid misconceptions and
misunderstandings. In this way you can avoid repeating information they already know as
well as adjust your learning objectives accordingly.
● After identifying your Audience by keeping the above-mentioned alternatives in mind
you can start writing down your learning objectives. Usually it starts with a phrase like,
“After reviewing this section, students will be able to…” or “After completing this
activity, learners will be able to…”
Behavior
● It’s quite simple to understand the different behaviors shown by students. By using
Bloom's taxonomy, you could classify individuals into three different groups by
assessing their intellectual behavior.
DOMAINS OF BLOOM'S TAXONOMY
● Benjamin Samuel Bloom (1913 – 1999) was an American educational psychologist who
made contributions to the classification of educational objectives and to the theory of
mastery learning.
● Bloom states that learning occurs in three different learning domains: Cognitive,
Affective, and Psychomotor.
Cognitive Domain
● The cognitive domain is further divided into two sub-categories: Cognitive process
dimension and the Knowledge dimension.
1.Cognitive process dimension
● This domain involves the process we use to apply and showcase our intellectual skills.
Ranked from lower to higher order complexities: remember, understand, apply, analyze,
evaluate, and then finally create.
2. Knowledge Dimension
● Students have different ways of showcasing and applying their knowledge just as much as
they learn in different ways:
Metacognitive: Learners focus on contextualizing, self-knowledge, strategy, and cognitive
tasks.
Conceptual: Learners focus on theories, assemblies, categories and groupings, ideologies,
and generalizations.
Factual: Learners focus on facts, specific details, and terminology.
Procedural: Learners focus on using different algorithms, techniques, and methods,
following step-by-step guidelines for specific scenarios.
Affective Domain
● This domain refers to the emotional capability of an individual and the ways in which they
act and react towards it. It puts emphasis on five subjective influences such as values,
emotions, motivations, appreciations, and personal attitudes.
● The five levels under the Affective domain are: Characterizing – to be able to
manage and resolve; Organizing – to be able to formulate, balance, and discuss; Valuing –
to be able to support and debate; Responding – to be able to volunteer, work together,
and follow; and Receiving – to be able to differentiate, accept, and listen.
Psychomotor Domain
● Psychomotor domain is the learning and combination of old and new skills that involves
physical movements.
Condition
● The third step in the ABC procedure is looking at the different conditions. Ask yourself
when writing your lesson aims and objectives – What conditions am I surrounded in?
This can also refer to specific tools and materials a student may need to apply in the
lesson as well as the classroom situation.
Degree
● The last step in the ABCD Approach is ‘Degree’. This basically refers to the level in
which a learner should perform for it to be seen as credible. The learning objective should
either be at its highest level, which means that the student can produce the aim with
precision and without any mistakes. Leading to the lowest level where the student can’t
produce the aim at all and are making many mistakes.
WRITING OBJECTIVE FROM COMPETENCIES

Competency: The desired knowledge, skills, and abilities (KSAs) a participant needs to
successfully perform specific tasks.

Learning Objective: A very specific statement that describes exactly what a participant will
be able to do after completing the course or program.

Knowledge is the condition of being aware of something that is acquired through training
and/or experience.

Skill is the ability to physically perform an activity or task. It includes physical movement
coordination, dexterity, and the application of knowledge.

Ability is the capacity or aptitude to perform physical or mental activities that are associated
with a particular task.

Writing objectives from competencies involves translating the skills and abilities outlined in
the competencies into specific, actionable goals. Here's a simple process you can follow:

1. Identify Competencies: Start by listing the key competencies relevant to the role or
task.
2. Break Down Competencies: For each competency, break down its components into
more specific elements.
3. Actionable Statements: Transform each component of the competency into a clear,
actionable statement. Use action verbs and be specific.
4. SMART Criteria: Ensure that each objective is SMART: Specific, Measurable,
Achievable, Relevant, and Time-bound.
5. Prioritize and Align: Prioritize the objectives based on their importance and
relevance.
6. Review and Refine: Review the objectives to ensure they accurately capture the
essence of the competencies.
7. Continuous Evaluation: Regularly evaluate progress towards these objectives and
adjust them if circumstances change or new competencies emerge.

Example of Competencies and Learning Objectives


Competency
● Correctly change the oil on an automobile in less than 10 mins
Learning objectives related to the above competency
By the end of this course, the student will be able to:
● Locate and remove drain plug

● Determine the appropriate oil weight

● Add the appropriate amount of oil and change the filter.


II. Abstracts of 2 research papers on the topics

Title: "The Impact of Target Setting on Employee Performance: A Longitudinal Study"


Abstract: This research paper presents the findings of a longitudinal study that explores the
effects of target setting on employee performance within a corporate setting. The study
tracked the performance of employees over a three-year period and analyzed the influence of
specific, measurable, and time-bound targets on motivation, productivity, and job
satisfaction. The results indicate that well-designed target setting can significantly improve
employee performance and engagement, but excessive pressure from unrealistic targets can
lead to detrimental outcomes.

Title: "Strategic Target Setting in Project Management: A Case Study Analysis"


Abstract: This paper investigates the strategic implications of target setting in the field of
project management through a series of in-depth case studies. The research examines how
project-specific targets, such as cost, timeline, and quality objectives, impact the overall
success of complex projects. The study underscores the importance of aligning targets with
the project's strategic goals and adapting them as circumstances change. It highlights the
critical role of target setting in achieving project success and delivering value to stakeholders.

III. Analysis: Clear Educational Objectives: The school should define clear educational
objectives, such as "Increase the average math test scores by 10% in one academic year."
Alignment with Educational Goals: The objectives should align with the school's broader
educational goals, like improving overall academic performance.
SMART Targets: Educational targets should be Specific (improving math scores),
Measurable (by 10%), Achievable (considering current student abilities), Relevant (to overall
academic performance), and Time-bound (within one academic year).
Assessment of Educational Barriers: Identifying potential barriers to achieving the math score
improvement, such as curriculum gaps or student motivation issues.
Resource Allocation: Allocating resources like hiring additional math teachers, providing
extra study materials, and implementing tutoring programs.

Insights: Clarity and Focus: Strategic target setting provides clarity and focus on what needs
to be achieved. It ensures that everyone involved understands the specific objectives and can
work towards them.
Alignment with Goals: Setting strategic targets ensures alignment with broader organizational
or educational goals. This alignment helps avoid pursuing projects or initiatives that don't
contribute to the overall mission.
Measurability: Targets should be measurable, which means you can track progress and
success. Measuring progress helps identify what's working and what needs improvement.
Resource Allocation: It facilitates effective allocation of resources. By defining strategic
targets, organizations can allocate budgets, personnel, and time where they are most needed
to meet the objectives.

Findings: Improving Academic Performance: Setting strategic targets in education is a key


strategy for improving academic performance. Schools can set targets for improving test
scores, graduation rates, or other educational outcomes.
Measuring Student Progress: Regular assessments and measurements are crucial for tracking
student progress and ensuring that they are on target to meet educational goals.
Involvement of Stakeholders: In education, it's important to involve parents, teachers, and
students in the process. This engagement helps create a supportive environment and aligns all
stakeholders with the educational targets.
Professional Development: Providing opportunities for teachers and staff to improve their
skills and knowledge is vital. It helps in delivering quality education and achieving
educational targets.

VI. TEST SCORE: UNIT TEST – 28/30

Strengths
 I took notes while the reporters were discussing
 I listened patiently and participated in the discussion
Weaknesses
 I forgot some of the topics that were discussed
 The other topics are difficult to remember

UNIT 3: DESIGNING AND DEVELOPING ASSESSMENTS

LESSON 1: CHARACTERISTICS OF QUALITY ASSESSMENT TOOLS

ASSESSMENT TOOLS
• A technique or method of evaluating information to determine how much a person or
student knows.
• It is an instrument used to collect data for each outcome.
THREE FUNCTIONS OF TEST
1. Provide information that is used for the improvement of instruction.
2. Making administrative decisions.
3. Guidance purposes.
PRINCIPLES OF TESTING
1. Measure all instructional objectives. Match all the learning objectives posed during
instruction.
2. Cover all learning tasks. Construct a test that contains a wide range of sampling of
items.
3. Use appropriate test items. Test items must be appropriate to measure learning
outcomes.
4. Make test valid and reliable. Construct a test that is valid to measure what it is
supposed to measure from the students.
5. Use to improve learning. Test should be utilized by the teachers properly to improve
learning.
APPROPRIATENESS OF ASSESSMENT TOOLS
• Objective test
• Subjective test
• Performance assessment
• Portfolio
• Oral questioning
• Observation technique
QUALITIES OF TEST INSTRUMENT
• VALIDITY. Appropriateness of score-based inferences.
• RELIABILITY. Consistency of measurement.
• FAIRNESS. Test items should not be biased.
• OBJECTIVITY. Agreement of two or more raters concerning the score of the student.
• SCORABILITY. The test should be easy to score.
• ADMINISTRABILITY. The test should be administered uniformly to all students.
• PRACTICALITY AND EFFICIENCY. Familiarity of the methods used, ease of scoring, and
interpretation of the test.

LESSON 2: TEACHER-MADE TEST


 Teacher-made tests are normally prepared and administered to test students'
classroom achievement, evaluate the teaching methods adopted by the teacher, and
evaluate other curricular programs of the school.
 A teacher-made test is one of the most valuable instruments in the hands of the
teacher to serve this purpose.
TWO GENERAL TYPES OF TEST ITEMS
• Selection-type or objective test items
• Supply-type or subjective test items
TYPES OF TEACHER-MADE TESTS
 Selection-type or Objective Test items
A. Multiple-Choice Test - A multiple-choice test is a widely used format to measure knowledge
and learning outcomes, including comprehension and application. It consists of three
parts: the stem, the keyed option, and the incorrect options or alternatives.
B. Matching Type Test - A matching-type item consists of two columns: Column A contains
the descriptions and is placed on the left side, while Column B contains the options
and is placed on the right side. The examinees are asked to match the options with the
descriptions they are associated with.
C. True or False Type - True or false tests are a type of "forced-choice" test where students
must determine whether a statement is true or false. They are suitable for assessing behavioral
objectives and cognitive knowledge, as they require students to choose between true or
false answers, especially when there are only two plausible alternatives or distracters.
 SUPPLY TYPE or SUBJECTIVE TYPE OF TEST ITEM
A. Completion Type or Short Answer Test - an alternative assessment method where
examinees must provide or create appropriate words, symbols, or numbers to answer
questions or complete statements, rather than selecting from given options. There are
two ways to construct this type of test: question form or statement form.
B. Essay Items - The extended response essay and restricted response essay are two types of
essay test items used to assess students' ability to organize and present original ideas.
The extended response essay allows students to determine the length and complexity of
their response, assessing their synthesis and evaluation skills. It is particularly useful for
determining whether students can organize ideas, integrate and express ideas, and
evaluate information. The restricted response essay, on the other hand,
places strict limits on both content and response, typically restricted by the scope of the
topic and the form of the response indicated in the question.
C. Problem-Solving Test - A problem-solving or computational test is a type of subjective
test that presents a problem situation or task and requires a demonstration of work
procedures and the correct solution, or just the correct solution. Teachers can assign full or
partial credit to either correct or incorrect solutions depending on the quality and kind
of work procedures presented.
LESSON 3: TABLE OF SPECIFICATION (TOS)
➢ A plan prepared by a classroom teacher as a basis for test construction.
➢ A chart or table that details the content and level of cognitive domain assessed on a test
as well as the types and emphases of test items (Gareis and Grant 2008).
IMPORTANCE OF TOS
➢ TOS is very important in addressing the validity and reliability of test items.
➢ TOS ensures that the assessment is based on the intended learning outcomes.
➢ TOS ensures that the number of questions on the test is adequate to ensure dependable
results that are not likely caused by chance.
Preparing a table of specification
1. Select the learning outcomes to be measured.
2. Make an outline of the subject matter to be covered in the test.
3. Decide on the total number of test items.
4. Compute the total number of class sessions (minutes) consumed on each topic.
5. Compute the percentage/weight of each topic (a worked example is sketched after this list).
6. Distribute the test items across the aligned cognitive domains.
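As a minimal worked sketch of steps 4-6 (the numbers here are hypothetical, not taken from the handout), suppose a 50-item test covers topics taught over 400 minutes in total, and one topic consumed 120 of those minutes:

\[
\text{weight} = \frac{120}{400} = 30\%, \qquad
\text{items allotted to the topic} = 0.30 \times 50 = 15
\]

The 15 items would then be distributed across the cognitive levels targeted by that topic's learning outcomes.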

LESSON 4: ASSESSMENT TOOLS DEVELOPMENT

ASSESSMENT DEVELOPMENT CYCLE


1. Planning Stage
- Determine who will use the assessment results and how they will use them.
- Identify the learning targets to be assessed.
- Select the appropriate assessment method or methods.
- Determine the sample size.
2. Development Stage
- Develop or select items, exercises, tasks, and scoring procedures.
- Review and critique the overall assessment for quality before use.
3. Use Stage
- Conduct and score the assessment.
- Revise as needed for future use.

Steps in Developing Assessment Tools


1. Examine the instructional objectives and goals of the topics previously discussed.
2. Make a Table of Specification (TOS)
3. Construct the test items.
4. Assemble the test items
5. Check the assembled items.
6. Write directions.
7. Make an answer key.
8. Analyze and improve the test items
ITEM ANALYSIS
• Is the process of examining the students' responses to individual items in the test. It
consists of different procedures for assessing the quality of the test items given to the
students. Through the use of item analysis, we can identify which of the given test items are
good and which are defective. Good items are to be retained, and defective items are to
be improved, revised, or rejected.
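As a hedged illustration (these conventional indices are not defined in the handout itself), item analysis typically reports a difficulty index and a discrimination index for each item:

\[
p = \frac{R}{N}, \qquad D = \frac{R_U - R_L}{n}
\]

where R is the number of examinees who answered the item correctly, N is the total number of examinees, R_U and R_L are the numbers of correct responses in the upper and lower scoring groups (often the top and bottom 27%), and n is the number of examinees in one group. Items with very low or very high p, or with low D, are the usual candidates for revision or rejection.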

LESSON 5: TEST RELIABILITY


• refers to the consistency with which a test yields the same rank for individuals who take the
test more than once (Kubiszyn and Borich, 2007).
• the reliability of a test can be determined by means of the Pearson product-moment
correlation, the Spearman-Brown formula, the Kuder-Richardson formulas, etc.
FACTORS AFFECTING TEST RELIABILITY
1. Length of the Test
2. Moderate item difficulty
3. Objective scoring
4. Heterogeneity of the student group
5. Limited Time
METHODS OF ESTABLISHING RELIABILITY OF A TEST
1. Test-Retest Method. A type of reliability determined by administering the same test twice
to the same group of students, with a time interval between the two administrations.
2. Equivalent/Parallel/Alternate Form. A type of reliability determined by administering two
different but equivalent forms of the test to the same group of students in close succession.
3. Split-Half Method. Administer the test once and score two equivalent halves of the test. To
split the test into halves that are equivalent, the usual procedure is to score the even-
numbered and the odd-numbered test items separately.
4. Kuder-Richardson Formula. Administer the test once, score the total test, and apply the
Kuder-Richardson formula. The KR-20 formula is applicable only in situations where
students' responses are scored dichotomously (the formulas for methods 3 and 4 are
sketched after this list).
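For reference, a minimal sketch of the formulas behind methods 3 and 4, as they are conventionally written (the handout names them but does not state them): the Spearman-Brown correction applied to the split-half correlation, and the KR-20 coefficient.

\[
r_{\text{full}} = \frac{2\,r_{\text{half}}}{1 + r_{\text{half}}}, \qquad
r_{KR20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^{2}}\right)
\]

Here r_half is the correlation between the two halves of the test, k is the number of items, p_i is the proportion of examinees answering item i correctly, q_i = 1 - p_i, and \(\sigma_X^2\) is the variance of the total test scores.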

Reliability Coefficient
• Is a measure of the amount of error associated with the test scores. The reliability
coefficient has the following description:
(a) The range of the reliability coefficient is from 0 to 1.0.
(b) The acceptable range of values is 0.6 or higher.
(c) The higher the value of the reliability coefficient, the more reliable the overall test scores.
(d) Higher reliability indicates that the test items measure the same thing.
INTERPRETING RELIABILITY COEFFICIENT
1. Group variability affects the size of the reliability coefficient. Higher coefficients result
from heterogeneous groups than from homogeneous groups. As group variability increases,
reliability goes up.
2. Scoring reliability limits test score reliability. If tests are scored unreliably, error is
introduced. This will limit the reliability of the test scores.
3. Test length affects test score reliability. As the length increases, the reliability tends to go
up (see the sketch after this list).
4. Item difficulty affects test score reliability. As test items become very easy or very
hard, test reliability goes down.
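As a hedged illustration of point 3 (not given in the handout), the general form of the Spearman-Brown formula estimates how reliability changes when a test is lengthened by a factor m:

\[
r_{\text{new}} = \frac{m\,r_{\text{old}}}{1 + (m-1)\,r_{\text{old}}}
\]

For example, doubling a test (m = 2) whose reliability is 0.60 would be expected to raise the reliability to 2(0.60)/(1 + 0.60) = 0.75, assuming the added items are comparable in quality to the originals.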

LESSON 6: VALIDITY

TEST VALIDITY


• Validity is concerned with whether the information obtained from an assessment permits the
teacher to make a correct decision about a student's learning. This means that the appropriateness
of score-based inferences or decisions is judged on the basis of the student's test results. Validity
is the extent to which a test measures what it is supposed to measure.
TYPES OF VALIDITY
1. Face Validity - It is the extent to which a measurement method appears "on its face" to
measure the construct of interest. Face validity is at best a very weak kind of evidence that a
measurement method is measuring what it is supposed to measure.
2. Content Validity - A type of validation that refers to the relationship between the test and the
instructional objectives; it establishes content so that the test measures what it is supposed to
measure.
3. Criterion-Related Validity - A type of validation that refers to the extent to
which scores from a test relate to theoretically similar measures. It is a measure of how
accurately a student's current test score can be used to estimate a score on a criterion
measure, like performance in courses, classes, or another measurement instrument.
a. Concurrent validity - The criterion and the predictor data are collected at the same
time.
b. Predictive validity - A type of validation that refers to a measure of the extent to which
student's current test result can be used to estimate accurately the outcome of the
student's performance at later time.
4. Construct Validity - A type of validation that refers to the measure of the extent to which
a test measures theoretical and unobservable qualities such as intelligence, math
achievement, performance anxiety, and the like, over a period of time on the basis of
gathered evidence.
a. Convergent validity - A type of construct validation wherein a test has a high
correlation with another test that measures the same construct.
b. Divergent validity - A type of construct validation wherein a test has a low correlation
with a test that measures a different construct.
c. Factor analysis - Assesses the construct validity of a test using complex statistical
procedures.
IMPORTANT THINGS TO REMEMBER ABOUT VALIDITY
1. Validity refers to the decisions we make, and not to the test itself or to the
measurements.
2. Like reliability, validity is not an all-or-nothing concept; it is never totally absent or
absolutely perfect
3. A validity estimate, called a validity coefficient, refers to specific type of validity. It ranges
between 0 and 1.
4. Validity can never be finally determined; it is specific to each administration of the test.

FACTORS AFFECTING THE VALIDITY OF THE TEST ITEM


1. The test itself.
2. The administration and scoring of a test.
3. Personal factors influencing how students respond to the test.
4. Validity is always specific to a particular group.
VALIDITY COEFFICIENT
• The validity coefficient is the computed correlation between the test scores (x) and the
criterion scores (y). In theory, the validity coefficient, like any correlation, ranges from 0 to 1.
In practice, most validity coefficients are small, usually ranging from 0.3 to 0.5; few exceed
0.6 to 0.7. Hence, there is a lot of room for improvement in most of our psychological
measurements.
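As a minimal sketch (the handout does not state the formula), the validity coefficient is typically computed as the Pearson product-moment correlation between the test scores x and the criterion scores y:

\[
r_{xy} = \frac{n\sum xy - \left(\sum x\right)\left(\sum y\right)}
{\sqrt{\left[\,n\sum x^{2} - \left(\sum x\right)^{2}\,\right]\left[\,n\sum y^{2} - \left(\sum y\right)^{2}\,\right]}}
\]

where n is the number of students who have both a test score and a criterion score. By the rule of thumb quoted above, a coefficient near 0.5 would already count as strong evidence of criterion-related validity.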

II. Abstracts of 2 research papers on the topics

Designing assessment in a digital world: an organizing framework


E-assessment typically seeks to improve assessment designs through the use of
innovative digital tools. However, the intersections between digital technologies and
assessment can be seen as increasingly complex, particularly as the sociotechnical
perspectives suggest assessment must be relevant to a digitally-mediated society. This
paper presents an organizing framework, which articulates this nuanced relationship
between the digital and assessment design. It draws together literature from diverse
domains, including educational technologies, assessment pedagogies and digital literacies.
The framework describes three main purposes for designing the digital into assessment: 1)
as a tool for improvement; 2) as a means to develop and credential digital literacies; and 3)
as a means to develop and credential uniquely human capabilities. Within each purpose, it
offers considerations for assessment and feedback design. In addition, it offers researchers
analytical tools to understand the role of the digital in assessment.

III. Analysis and Insights on the Research Paper problem and Findings

Problem and Findings
The final consideration when employing a digital tool to improve assessment design is to
acknowledge the potential for harm as well as the benefits. As mentioned, technology is
usually associated with a belief that innovation will make a positive difference or, at least,
that it will be neutral. For example, there is no SAMR category describing how the digital
might make the student experience worse not better. This is a limitation associated with this
purpose: if you are intending to develop an improvement, it becomes easy to forget that
technologies can have unintended and unproductive impacts, such as steering instrumental
student behaviors (Henderson, Selwyn, and Aston 2017). For example, an online
proctoring program may enhance assessment security but simultaneously negatively impact
student experience through raised anxiety levels (Woldeab and Brothen 2019).
Similarly, inaccessible assessment designs are a serious concern – no matter how innovative
or transformative – as they might further exclude students with disabilities.

Analysis
Our view is that educators and researchers alike can benefit from explicitly and critically
considering the role of the digital within assessment design. In Bennett et al.'s
(2017) study, university educators describe two main reasons for employing digital
technology as part of assessment design. Firstly, they see it providing efficiencies (real or
imagined). Secondly, they describe its innovative nature and the need to be
“contemporary”. However, beyond this work, there is a striking absence in the literature.
Moreover, there is little information to guide educators who wish to take account of the
digital in assessment design. While there are helpful well-known educational technology and
assessment frameworks, the former do not appear to focus on assessment and the latter do
not appear to consider the digital.
Insight
The first and most obvious purpose for employing digital technologies is as a tool to
improve assessment or assessment processes, including feedback. This purpose appears
both self-evident and predominant. For example, educators design digitally-mediated
assessment tasks, such as e-portfolios, wikis, video tasks and so on, to improve student
learning through assessment. Similarly, they incorporate digital tools, such as automated
grading, to make assessment processes more efficient. However, the nature of this
improvement is often not well articulated. Therefore, within this broad purpose, we
distinguish three areas of consideration for assessment designers: assessment rationales,
level of digital enhancement and potential harms.
