Commander's Handbook
for
Assessment Planning and Execution
Version 1.0
Joint Staff, J-7
Joint and Coalition Warfighting
Suffolk, Virginia
9 September 2011
FREDERICK S. RUDESHEIM
Major General, U.S. Army
Deputy Director, J-7, Joint Staff,
Joint and Coalition Warfighting
PREFACE
1. Scope
2. Purpose
3. Content
This handbook complements and expands upon the overarching concepts and
principles that have been incorporated into keystone joint doctrinal publications, to
include joint publications 3-0, Joint Operations; 5-0, Joint Operation Planning; and 2-0,
Joint Intelligence. It supports requirements of joint operation planning and offers
techniques and procedures currently used in the field. It is intended as a reference for
joint forces conducting assessment as an element of a joint operation.
4. Development
5. Application
6. Contact Information
Comments and suggestions on this important topic are welcomed. Points of contact regarding this handbook are Mr. Michael McGonagle, 757-836-9883, DSN 668-9883, [email protected]; Mr. Marc Halyard, 757-203-5508, DSN 668-5508, [email protected]; and Mr. Walter Ledford, 757-203-6155, DSN 668-6155, [email protected].
TABLE OF CONTENTS
PAGE
EXECUTIVE SUMMARY .............................................................................................. vii
CHAPTER I
ASSESSMENT OVERVIEW
CHAPTER II
ASSESSMENT PROCESS
CHAPTER III
ASSESSMENT COMPONENTS
CHAPTER IV
DEVELOPING THE ASSESSMENT PLAN
CHAPTER V
STAFF ASSESSMENTS DURING EXECUTION
General ....................................................................................................................... V-1
Assessment Periodicity .............................................................................................. V-2
Effects Assessment .................................................................................................... V-3
Task Assessment ........................................................................................................ V-5
Deficiency Analysis ................................................................................................... V-5
Assessment Summary Development ......................................................................... V-8
CHAPTER VI
OPERATIONAL IMPLICATIONS
General ..................................................................................................................... VI-1
Doctrine .................................................................................................................... VI-1
Organization ............................................................................................................. VI-2
Training .................................................................................................................... VI-2
Leadership and Education ........................................................................................ VI-2
Personnel .................................................................................................................. VI-3
APPENDICES
A
B
C
D
E
F
G
H
GLOSSARY
Part I Abbreviations and Acronyms ..................................................................... GL-1
Part II Terms and Definitions .............................................................................. GL-3
FIGURES
I-1 Assessment and Commander's Decision Cycle ............................................. I-3
I-2 Notional Overall Assessment Composition ................................................... I-4
I-3 Assessment Interaction.................................................................................... I-8
II-1 Assessment Process ....................................................................................... II-2
II-2 Evaluating Variances .................................................................................... II-6
III-1 Assessment Interaction ................................................................................ III-2
III-2 Notional Assessment Structure ................................................................... III-3
III-3 Assessment Measures and Indicators .......................................................... III-5
III-4 Measure of Effectiveness Development .................................................... III-11
III-5 Indicator Threshold Development.............................................................. III-14
IV-1 Mission Analysis ......................................................................................... IV-2
IV-2 Assessment Plan Steps ................................................................................ IV-3
IV-3 Example Effect/Measure of Effectiveness/Indicator Matrix........................ IV-7
V-1 Notional Overall Assessment Composition ................................................. V-1
V-2 Staff Assessment .......................................................................................... V-2
V-3 Example Assessment Summary ................................................................... V-4
V-4 Example Effects-to-Task Summary ............................................................. V-6
V-5 Measure of Effectiveness Indicator Analysis Matrix ................................... V-7
V-6 Task Analysis Matrix ................................................................................... V-8
V-7 Sample Assessment Summary ..................................................................... V-9
D-1 Conceptual Framework for Diagnosing a Conflict ...................................... D-4
E-1 Analyzing Causes of Instability .................................................................... E-6
E-2 Tactical Stability Matrix ............................................................................... E-7
F-1 Monitoring Versus Evaluation .................................................................... F-17
F-2 Relative Timing of Project Design, Implementation, Monitoring, and
Evaluation Tasks ......................................................................................... F-18
F-3 World Bank's 10 Steps to a Results-Based Monitoring and
Evaluation System........................................................................................ F-19
G-1 Metrics ......................................................................................................... G-1
G-2 Institutional Capacities and Drivers of Conflict .......................................... G-2
G-3 Steps for Measuring Progress in Conflict Environments ............................. G-3
EXECUTIVE SUMMARY
COMMANDER'S OVERVIEW
Assessment is a commander-centric process.
The purpose of assessment is to support the commander's decision making.
Assessment Overview
Commanders, assisted by their staffs and subordinate commanders, along with interagency and multinational partners and other stakeholders, will continuously assess the operational environment and the progress of the operation toward the desired end state in the time frame desired. Based on their assessment, commanders direct adjustments, thus ensuring the operation remains focused on accomplishing the mission. Assessment is applicable across the range of military operations. It offers perspective and insight, and provides the opportunity for self-correction, adaptation, and thoughtful results-oriented learning.
Assessment is a key component of the commander's decision cycle, helping to determine the results of tactical actions in the context of overall mission objectives and providing potential recommendations for the refinement of future plans. Assessments provide the commander with the current state of the operational environment, the progress of the campaign or operation, and recommendations to account for discrepancies between the actual and predicted progress. Commanders then compare the assessment against their vision and intent and adjust operations to ensure objectives are met and the military end state is achieved.
There are three fundamental issues that any assessment must address: where are we, so what and why, and what's next.
Combine Quantitative and Qualitative Indicators
Assessment Process
The assessment process entails three distinct tasks: continuously monitoring the situation and the progress of the operations; evaluating the operation against measures of effectiveness (MOEs) and measures of performance (MOPs) to determine progress relative to the mission, objectives, and end states; and developing recommendations/guidance for improvement.
Effective assessment incorporates both quantitative
(observation based) and qualitative (opinion based)
indicators. Human judgment is integral to assessment. A
balanced judgment for any assessment identifies the
information on which to concentrate. Amassing statistics is easy. Determining which actions imply success proves
far more difficult due to dynamic interactions among
friendly forces, adaptable enemies, populations, and other
aspects of the operational environment such as economics
and culture. This is especially true of operations that
require assessing the actions intended to change human
behavior, such as deception or stability operations. Using
both quantitative and qualitative indicators reduces the
likelihood and impact of the skewed perspective that
results from an overreliance on either expert opinion or
direct observation.
Incorporate Formal and
Informal Methods
Measures and
Indicators
Developing Measures of
Effectiveness and
Indicators
conducted concurrent with or shortly following the course
of action development phase of the joint operation
planning process. The intent in developing MOEs and
their associated indicators is to build an accurate baseline
model for determining whether joint and supporting
agency actions are driving target systems toward or away
from exhibiting the desired effects. As strategic and
operational level effects are seldom attained or exhibited
instantaneously, MOEs provide a framework for
conducting trend analysis of system behavior or capability
changes that occur over time, based on the observation of
specific, discrete indicators.
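As a purely illustrative aside, the trend analysis described here can be sketched in a few lines of code. Everything in the sketch is hypothetical: the indicator (weekly transmission-line failures), its values, and the use of a simple least-squares slope to judge direction are invented for illustration, not taken from any actual assessment plan.

```python
# Hypothetical sketch: trend analysis of one discrete MOE indicator over time.
# Indicator name and values are illustrative only.

def trend(observations):
    """Least-squares slope of (period, value) pairs; the sign shows direction."""
    n = len(observations)
    xs = [p for p, _ in observations]
    ys = [v for _, v in observations]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Weekly count of transmission-line failures (lower is better for this MOE).
failures = [(1, 14), (2, 12), (3, 15), (4, 10), (5, 8), (6, 7)]

slope = trend(failures)
direction = "toward desired effect" if slope < 0 else "away from desired effect"
print(f"slope={slope:.2f} -> {direction}")
```

The point of the sketch is the doctrine's claim: a single observation says little, but a series of discrete indicator observations supports a judgment about whether the system is trending toward or away from the desired effect.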
Develop Indicator
Threshold Criteria
Developing The
Assessment Plan
Incorporation into
Plans and Orders
base plan or order and may be repeated in the Operations
annex. The assessment plan may be included as an
appendix to the Operations annex, or alternatively, in the
Reports annex and should provide a detailed matrix of the
MOEs associated with the identified desired effects as
well as subordinate indicators. The assessment plan
should identify reporting responsibilities for specific
MOE and indicators.
Staff Assessments
During Execution
Effects Assessment
Task Assessment
Deficiency Analysis
insufficient. Deficiency analysis consists of a structured,
conditions-based process intended to validate that the
staff assessment is accurate, refine the collection
requirements (when required), and conduct task and node-action analysis in order to provide initial guidance to
planners for follow-on branch/sequel development or task
plan/operation order refinement.
Joint Doctrine
Operational Implications
Joint doctrine should address considerations related to
assessment. Joint doctrine should continue to expand
current guidance and discussion on how to integrate
interagency and multinational assessment processes and
procedures, particularly in stability and counterinsurgency
type operations. The primary publication for discussion
of assessment in joint publications will transition from JP
3-0, Joint Operations, to JP 5-0, Joint Operation
Planning, during the 2011 revision cycle, with a
significant increase in content for JP 5-0 over the current
discussion.
Other joint doctrine publications with
significant input and/or content concerning assessment
include JPs 2-01.3, Joint Intelligence Preparation of the
Operational Environment, 3-07, Stability Operations,
3-08, Interorganizational Coordination During Joint
Operations, 3-24, Counterinsurgency Operations, 3-33,
Joint Task Force Headquarters, and 3-60, Joint
Targeting. Numerous other joint publications have
assessment-related information included.
Training
Leadership and
Education
CHAPTER I
ASSESSMENT OVERVIEW
Assessment helps the commander ensure that the broad operational approach remains feasible and acceptable in the context of higher policy, guidance, and orders.
Vision for a Joint Approach to Operational Design
US Joint Forces Command
6 October 2009
1. General
d. The overall assessment is composed of the commander's personal assessment, the staff assessment, and other assessments/inputs. The focus of this handbook is on the development of the staff assessment as both a quantitative and qualitative product. The other components of the overall assessment may include formal and informal assessment results from subordinate and supporting units and agencies, including multinational and interagency partners. Additionally, the commander's personal assessment will often be shaped by a number of venues, including battlefield circulation, key leader engagements, discussions with other military and civilian leaders, and the commander's sense of the progress of the operation or campaign. While there is no set formula or process for developing subjective assessment components, they are necessary to temper the staff assessment with what Clausewitz referred to as the commander's coup d'oeil, or intuition.
e. Commanders and staffs should attempt to maintain a balance between
quantitative and qualitative measures in assessment. Measuring progress in military
operations is a difficult and often subjective process, particularly in counterinsurgency
and stability operations. To avoid this problem, and because they are more comfortable
with objective results, staffs have a tendency to favor quantitative measures. As such, there is a danger of over-engineering the assessment process. Staffs often develop extensive quantifiable assessments that do not always logically or clearly support a commander's requirement or assist him in developing guidance and intent. Commanders and staffs should use caution to avoid confusing measuring activity with measuring progress. In many cases, quantitative indicators should serve only as a starting point for commanders' and staffs' subjective assessments based on observation and experience.
f. Fundamental to assessments are analyses about progress in designated mission
areas, as measured against the expected progress in those mission areas. These analyses
allow the commander and the staff to determine where adjustments must be made in
operations and serve as a catalyst for future planning. Ultimately, assessment allows the
commander and staff to keep pace with a constantly evolving situation while staying
focused on mission accomplishment.
2.
Figure I-1. Assessment and the Commander's Decision Cycle
c. There are three fundamental issues that any assessment must address: where are we, so what, and what's next.

(1) First, assessment must determine "where we are." The assessment process must examine the data received and determine, in relation to the desired effects, the current status of the operation and the operational environment. This is the most basic and fundamental question that assessment must answer. For the assessment process discussed in this handbook, the first step in answering this question should be relatively straightforward because it will be based on the assessment model developed in planning. The measures of effectiveness (MOE), MOE indicators, and associated criteria that were developed in the assessment planning process will drive the metrics-based status of the effects. This forms the objective foundation for the cross-functional assessment team, who apply their collective judgment, experience, and understanding of the operational environment to derive an informed subjective interpretation of the data. It is at this point in the assessment process that science first meets operational art. Where the quantitative and qualitative assessment of the data converges, the actual status of desired and undesired effects is determined.

(2) The second fundamental issue that assessment must address is "so what" (i.e., what does the data mean and what is its significance)? To answer this question, the assessment team will examine the MOE indicators, both individually and in relation to
each other. This is actually the first part of the deficiency analysis process. If a given
effect is not being achieved or achieved in accordance with a desired timeline, the
assessment team must examine the underlying data elements (MOE and MOE indicators)
to determine the potential or suspected reason for the deficiency. The shortfall may be in
the execution of the collection plan, in the actions selected to achieve the desired
effect(s), or due to other environmental factors. Regardless, the story the data is telling
must be determined. A detailed examination and analysis of the indicator data may
reveal where these shortfalls are occurring or areas where actions may be applied to more
successfully achieve the desired effect(s). For example, one of possibly multiple reasons
that the effect "Host Nation provides basic human services" is not being achieved might be related to a measurable decrease in the availability of electricity in a key urban area (MOE: "Increase/decrease in the availability of electricity in key urban areas"). One
indicator might be reporting that the number of total kilowatt hours of electricity being
produced at a particular servicing power plant is relatively high or stable. A second
indicator, however, may indicate that transmission line failures for that urban area are
increasing thus negatively impacting the overall availability of electricity (MOE) and the
provision of basic human services (effect). Further examination of additional MOE,
indicators or other intelligence information may suggest whether the transmission line
failures are the result of equipment malfunctions, poor maintenance procedures, or
attacks by local insurgent or criminal groups. Regardless of the answer in this particular
example, the second fundamental requirement for assessment should be clear. A status
report without a detailed examination of the data is of marginal value to the commander. Assessment needs to answer the "so what."
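As an illustrative sketch of the drill-down just described, the effect, MOE, and indicators from the electricity example can be modeled as a small data structure and scanned for the indicators that explain a lagging effect. The structure, field names, and status flags are invented for illustration; they are not a prescribed assessment format.

```python
# Hypothetical effect -> MOE -> indicator hierarchy for the electricity example.
assessment = {
    "effect": "Host Nation provides basic human services",
    "moes": [
        {
            "name": "Increase/decrease in availability of electricity in key urban areas",
            "indicators": [
                {"name": "Kilowatt hours produced at servicing plant", "on_track": True},
                {"name": "Transmission line failures in urban area", "on_track": False},
            ],
        }
    ],
}

def deficiencies(assessment):
    """List (MOE, indicator) pairs that may explain why an effect lags."""
    return [
        (moe["name"], ind["name"])
        for moe in assessment["moes"]
        for ind in moe["indicators"]
        if not ind["on_track"]
    ]

for moe_name, ind_name in deficiencies(assessment):
    print(f'Effect shortfall traced to "{ind_name}" under MOE "{moe_name}"')
```

The sketch mirrors the narrative: the production indicator is stable, so the drill-down isolates the transmission-line indicator as the data element behind the shortfall, which is the "so what" the assessment team must then interpret.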
(3) Finally, and perhaps most importantly, assessment must begin to address the "what's next?" Assessment must combine the analysis of the "where we are" and the "so what" and develop thoughtful, logical guidance for the command's planning efforts. This guidance should not take the form of specific or detailed courses of action; rather, it should identify potential opportunities, areas to exploit, or ways ahead that the joint planning group (JPG) or operations planning team (OPT) can leverage to initiate
follow-on plan refinement and the development of additional courses of action to present
to the commander. The recommendations that emerge from the assessment process are, therefore, a hand-off from the assessment team to plans and complete one rotation of the commander's decision cycle. The final recommendations that are ultimately developed by the JPG/OPT are typically provided to the commander in the form of a decision brief. In an observed best practice, some commands introduce the decision brief with a formal presentation from the assessment team to serve as a scene setter for the planning staff's recommendations. Alternatively, the formal assessment can be presented to the commander in a separate forum in order to receive his thoughts and direction regarding the assessment's conclusions and planning recommendations. The commander can use this presentation as a vehicle to provide additional planning guidance for the follow-on effort conducted by the JPG/OPT. Regardless of the method chosen to conduct the exchange between assessment and plans, it is of critical importance that re-integration occurs with planner involvement in the assessment process and assessor participation in follow-on planning.
Monitoring and evaluating are critical activities; however, assessment is
incomplete without recommending or directing action. Assessment may
diagnose problems, but unless it results in recommended adjustments, its use
to the commander is limited.
U.S. Army Field Manual 5-0
The Operations Process
(3) The staff assessment framework consists of effects assessment, task
assessment, and deficiency analysis. Each of these is further discussed in Chapter V,
Staff Assessments During Execution.
b. The use of these terms and the construct discussed in this handbook are neither authoritative nor prescriptive. They merely represent a common set of references that
should be generic enough to encompass the wide variety of assessment structures and
terms already in use throughout the joint force. As assessment continues to mature and
the processes are further refined and incorporated into joint and Service doctrine, a
common set of terms and processes should ultimately ensue.
4. Assessment Levels
a. Assessment occurs at all levels and across the entire range of military
operations. These assessments are interrelated and interdependent. Although each level
of assessment may have a specific focus and a unique battle rhythm, together they form a
hierarchical structure in which the conduct of one level of assessment is crucial to the
success of the next (see Figure I-3). Theater-strategic and operational-level assessment
efforts concentrate on broader tasks, effects, objectives, and progress toward the end
state, while tactical-level assessment primarily focuses on task accomplishment. As a
general rule, the level at which a specific operation, task, or action is directed should be
the level at which such activity is assessed. This properly focuses assessment and
collection at each level, reduces redundancy, and enhances the efficiency of the overall
assessment process.
b. Typically, the level or frequency at which assessment occurs should be relative
to the level at which a specific operation, task, or action is directed. Tactical level
headquarters routinely conduct task assessments using MOPs and may look at MOEs in
relation to the assigned or derived effects which support the higher headquarters. These
assessments normally occur relatively frequently and are a focus area within the current operations staff. Operational level headquarters and theater-strategic headquarters
tend to focus most of their assessment efforts on effects assessment, and the overall
progress to achieve the objectives and end state. Because the assessment process needs
to support the commander's decision cycle, the frequency of formal assessments must
match the pace of campaign execution.
Figure I-3. Assessment Interaction
5. Use of Effects
c. Assessment supports the determination of the accomplishment or non-accomplishment of objectives through the detailed study and understanding of changes to the operational environment. This is usually done by determining the status of objectives and effects. It attempts to answer the question of "are we doing the right things?" by
measuring changes to the physical states or behaviors of the systems associated with the
effects under examination. Assessment attempts to measure change (positive or negative
system changes) through the use of MOEs that are relevant, measurable, responsive, and
resourced.
6. Organization
b. Starting in mission analysis, the J-2 supports the JFC's decision-making process
through the JIPOE. JIPOE is particularly valuable in identifying and developing MOE
indicators to identify changes in adversary system behavior, capabilities, or the
operational environment. Intelligence collection personnel, as well as analysts, are
particularly important to the assessment team. Their expertise, particularly if applied
early on, can provide insight into whether proposed effects, MOEs, and indicators are measurable, observable, relevant, and responsive.
c. Intelligence support to execution-phase assessments is equally important as its
support in the planning phase. Assessment begins as soon as intelligence generated in
support of MOEs and indicators is received. Based on the collection plan, many
indicators will be observable through technical or human intelligence disciplines. These
inputs will usually be provided by the JIOC/joint intelligence support element (JISE) or
through its representatives to the assessment team. Several commands conducting
assessment in joint exercises have benefited from establishing a formal agenda for their
assessment team, opening with a current intelligence summary, then moving to a review
of the status of effects. The assessment team normally focuses on achieving consensus on the status of each effect and its associated MOE(s) individually. Where additional
intelligence indicates that this assessment may be invalid, the effect and/or MOE(s) are
discussed until an agreement is reached on the current assessment status.
Throughout the operations process, commanders integrate their own
assessments with those of the staff, subordinate commanders, and other
partners in the area of operations. Primary tools for assessing progress of the
operation include the operation order, the common operational picture, personal
observations, running estimates, and the assessment plan. The latter includes
measures of effectiveness, measures of performance, and reframing criteria.
US Army Field Manual 5-0
The Operations Process
8.
The assessment process works best when supported and supporting plans and their
assessments link and relate to each other. As indicated in Figure I-3, each successive
level of assessment is linked to the previous level, either receiving guidance and direction
or providing required information. For instance, the tactical-level assessment plan should
delineate how it links to or supports the operational-level assessment plan. Similarly, the
operational-level assessment plan should delineate the relationship and mechanisms (e.g., tasks and guidance to subordinate organizations) by which tactical-level assessment
data can be gathered and synthesized into the operational-level assessment.
CHAPTER II
ASSESSMENT PROCESS
Assessment has to be based on metrics that make sense. Otherwise, you'll be drawing conclusions that are incorrect.
LTG P. Chiarelli
Commander, Multi-National Corps-Iraq
November 2005
1. General
and the echelon of command, assessment may require a detailed process including a
formal assessment plan with dedicated assessment cell or element. Alternatively, it may
be an informal process that relies more on the intuition of the joint force commander,
subordinate commanders, and staffs.
e. When assessing operations, JFCs and staffs should avoid excessive analysis.
As a general rule, the level at which a specific operation, task, or action occurs should be
the level at which such activity is assessed. This focuses assessment at each level and
enhances the efficiency of the overall assessment process.
2.
b.
Monitoring
Evaluating
(2) Criteria in the form of MOEs and MOPs aid in determining progress toward
performing tasks, achieving objectives, and attaining end state conditions. MOEs help
determine if a task is achieving its intended results. MOPs help determine if a task is
completed properly. MOEs and MOPs are simply criteria; they do not represent the
assessment itself. MOEs and MOPs require relevant information in the form of
indicators for evaluation.
(3) MOEs measure changes in conditions, both positive and negative, to help
answer the question, are we doing the right things? MOEs are used at the strategic,
operational, and tactical levels to assess the impact of military operations and measure
changes in the operational environment, changes in system behavior, or changes to
adversary capabilities. MOEs are based on observable or collectable indicators. Several
indicators may make up an MOE, just like several MOEs may assist in assessing progress
toward the achievement of an objective or regression toward a potential crisis or branch
plan execution. Indicators provide evidence that a certain condition exists or certain
results have or have not been attained, and enable decision makers to direct changes to
ongoing operations to ensure the mission remains focused on the end state. MOE
assessment is implicit in the continuous nature of the JIPOE process. Upon the collection
of indicators, JIPOE analysts can compare the baseline intelligence estimate used to
inform the plan against the current situation to measure changes. MOEs are commonly
found and tracked in formal assessment plans. Examples of MOEs for the objective "provide a safe and secure environment" may include:
(a) Decrease in insurgent activity.
(b) Increase in reporting of insurgent activity to host-nation security forces.
(c) Decrease in civilian injuries involving mines and unexploded ordnance.
(d) Attitude/opinion/behavioral changes in selected populations.
(e) Changes in media portrayal of events.
(4) On the other hand, MOPs help answer questions such as, "was the action taken?" or "were the tasks completed to standard?" A MOP confirms or denies that a task has been properly performed. MOPs are commonly found and tracked at all levels in execution matrixes. MOPs are also heavily used to evaluate training. MOPs help to answer the question, "are we doing things right?"
(5) In general, operations consist of a series of collective tasks sequenced in
time, space, and purpose to accomplish missions. Current operations cells use MOPs in
execution matrixes and running estimates to track completed tasks. Evaluating task
accomplishment using MOPs is relatively straightforward and often results in a yes or no
answer. Examples of MOPs include:
situation have been identified. Assumptions identified in the planning process are challenged to determine if they are still valid.

(9) A key aspect of evaluation is determining variances: the difference between the actual situation and what the plan forecasted the situation would be at the time or event. Based on the significance of the variances, the staff makes recommendations to the commander on how to adjust operations to accomplish the mission more effectively (see Figure II-2).

(10) Evaluating also includes considering whether the desired conditions have changed, are no longer achievable, or are not achievable through the current operational approach. This is done by continually challenging the key assumptions made when developing the operational approach and subsequent plan. When a key assumption is invalidated, adjustments, up to and including developing a new plan, may be in order.

Figure II-2. Evaluating Variances

d. Recommending or Directing Action

(1) Monitoring and evaluating are critical activities; however, the assessment process is incomplete without recommending or directing action. Assessment may diagnose problems, but unless it also results in recommended adjustments, its use to the commander is limited.
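As an illustrative aside, the variance evaluation described in paragraph (9), comparing actual progress against what the plan forecasted at a given point, can be sketched as follows. The condition names, progress fractions, and significance tolerance are all hypothetical values chosen for illustration.

```python
# Minimal sketch of variance evaluation: compare the plan's forecast to the
# actual situation and flag variances significant enough to warrant a
# recommendation. All data values are invented.

def evaluate_variances(forecast, actual, tolerance=0.10):
    """Return {condition: variance} for variances exceeding the tolerance.

    forecast and actual map condition names to progress fractions (0.0-1.0).
    """
    significant = {}
    for condition, planned in forecast.items():
        variance = actual.get(condition, 0.0) - planned
        if abs(variance) > tolerance:
            significant[condition] = round(variance, 2)
    return significant

forecast = {"security incidents reduced": 0.60, "power restored": 0.80}
actual = {"security incidents reduced": 0.55, "power restored": 0.50}

print(evaluate_variances(forecast, actual))
```

Here the security condition trails the forecast by only 0.05 and falls inside the tolerance, while the power condition trails by 0.30 and is flagged; in doctrinal terms, only the significant variance drives a staff recommendation to the commander.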
(2) Based on the evaluation of progress, the staff brainstorms possible
improvements to the plan and makes preliminary judgments about the relative merit of
those changes. Assessment diagnoses threats, suggests improvements to effectiveness,
and reveals opportunities. Staff members identify those changes possessing sufficient
merit and provide them as recommendations to the commander or make adjustments
within their delegated authority. Recommendations to the commander range from
continuing the operation as planned to executing a branch or making unanticipated
adjustments. Making adjustments includes assigning new tasks to subordinates,
reprioritizing support, adjusting the ISR plan, and significantly modifying the course of
action.
Commanders integrate recommendations from the staff, subordinate
commanders, interagency and multinational partners, and other stakeholders with their
personal assessment. Commanders then decide if and how to modify the operation to
better accomplish the mission.
3.
Quantitative Indicators
methods, and standards for observing and reporting the events also require judgment.
After collection, the commander or staff decides whether to use the number as an
indicator in a formal assessment plan and for which MOEs or MOPs.
(3) Normally, quantitative indicators prove less biased than qualitative
indicators. In general, numbers based on observations are impartial (assuming that the
events in question were observed and reported accurately). Often, however, these
indicators are less readily available than qualitative indicators and more difficult to select
correctly. This is because the judgment aspect of which indicators validly inform the
MOE is already factored into qualitative indicators to a degree. Experts factor in all
considerations they believe are relevant to answering questions. However, this does not
occur inherently with quantitative indicators. The information in quantitative indicators
is less refined and requires greater judgment to handle appropriately than information in
qualitative indicators.
(4) Public opinion polling can be easily miscategorized. It often provides an
important source of information in prolonged stability operations. Results of a rigorously
collected and statistically valid public opinion poll are quantitative, not qualitative. Polls
take a mathematically rigorous approach to answering the question of what people really
think; they do not offer opinions on whether the people are correct.
(5) While the results of scientifically conducted polls are quantitative, human
judgment is involved in designing a poll. Decisions must be made on what questions to
ask, how to word the questions, how to translate the questions, how to select the sample,
how to choose interviewers, what training to give interviewers, and what mathematical
techniques to use for getting a sample of the population.
d.
Qualitative Indicators
However, subjective measures have a higher risk of bias. Human opinion is capable of
spectacular insight but also vulnerable to hidden assumptions that may prove false.
(3) Differentiating between quantitative and qualitative indicators is useful but
signifies a major tendency rather than a sharp distinction in practice.
(a) Quantitative indicators often require a degree of judgment in their
collection. For example, determining the number of mortar attacks in a given area over a
given period requires judgment in categorizing attacks as mortar attacks. A different
delivery system could have been used, or an improvised explosive device could have
been mistaken for a mortar attack. The attack could also have landed on a boundary,
requiring a decision on whether to count it.
(b) Similarly, qualitative indicators always have some basis in observed
and counted events. The same indicator may be quantitative or qualitative depending on
the collection mechanism. For example, the indicator may measure a change in market
activity for village X. If a Soldier observes and tracks the number of exchanges, then the
indicator is quantitative. If the battalion commander answers that question in a mandated
monthly report based on a gut feel, then the indicator is qualitative.
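The point above, that the same indicator can be quantitative or qualitative depending on how it is collected, can be made concrete in a small sketch (the class and category names are illustrative assumptions, not doctrinal terms):

```python
from dataclasses import dataclass

# Hypothetical sketch: the indicator "market activity in village X" is
# quantitative when a Soldier counts exchanges, qualitative when a
# commander reports a gut-feel estimate in a monthly report.
@dataclass
class Indicator:
    name: str
    collection_mechanism: str  # how the underlying data is gathered

    @property
    def kind(self) -> str:
        # Counted or polled observations are quantitative; judgment-based
        # reports are qualitative.
        counted = {"direct count", "sensor report", "statistical poll"}
        if self.collection_mechanism in counted:
            return "quantitative"
        return "qualitative"

by_count = Indicator("market activity, village X", "direct count")
by_estimate = Indicator("market activity, village X", "commander's estimate")
```

Here `by_count.kind` is quantitative and `by_estimate.kind` is qualitative, mirroring the market-activity example in the text.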
4.
(2) How will a particular task, objective, end state condition, or assumption be
assessed? What MOEs and MOPs will be used?
(3) What information requirements (indicators) are needed to support a
particular assessment?
(4) Who on the staff has primary responsibility for assessing a particular area?
What is the collection plan?
d. Commanders must be careful, however, not to over-assess. Staffs at all levels
can easily get bogged down in developing formal assessment procedures for numerous
tasks and objectives. Numerous additional reports, questions, and information
requirements from higher headquarters can smother subordinate commanders and their
staffs. Often standard reports, operational and intelligence summaries, and updates by
subordinate commanders suffice. Higher echelons should not ask for something that the
lower echelon does not need for its own purposes.
A common mistake many leaders make is to allow themselves to become too
engrossed in the details, too fascinated by the tactical aspects of the enterprise.
This is understandable since whether it is security matters or sales of a
particular product, the ultimate terminal transaction, or tactical level of
execution in military parlance, all tend to be more exciting and draw us in. The
toughest job for the leader, then, is to trust in the strategy, trust in subordinate
leaders, and trust the sensors to do their jobs to report the right information; in
so doing, they should be able to stay out of the thicket of tactical execution.
ADM James G. Stavridis
Partnership for the Americas
November 2010
5.
establishing cause and effect must be recorded explicitly and challenged periodically to
ensure they are still valid.
c. In its simplest form, an effect is a result, outcome, or consequence of an action.
Direct effects are the immediate, first-order consequences of a military action unaltered
by intervening events. They are usually immediate and easily recognizable. For
example, an enemy command and control center destroyed by friendly artillery or a
terrorist network courier captured by a direct-action mission. Establishing the link
between cause and effect in the physical domains is usually straightforward, as is
assessing progress.
d. It is often difficult to establish a link or correlation that clearly identifies actions
that produce effects beyond the physical domains. The relationship between action taken
(cause) and nonphysical effects may be coincidental. Then the occurrence of an effect is
either purely accidental or perhaps caused by the correlation of two or more actions
executed to achieve the effect. For example, friendly forces can successfully engage
enemy formations with fire and maneuver at the same time as MISO. MISO might urge
enemy soldiers to surrender. If both these events occur at the same time, then correlating
an increase in surrendering soldiers to MISO will be difficult. As another example,
friendly forces may attempt to decrease population support for an insurgency in a
particular city. To accomplish this task, the unit facilitates the reconstruction of the city's
power grid, assists the local authorities in establishing a terrorist tips hotline, establishes a
civil-military operations center, and conducts lethal operations against high-payoff targets
within the insurgency. Identifying the relative impact of each of these activities is
extremely challenging but is critical for allocating resources smartly to accomplish the
mission. Unrecognized influences completely invisible to assessors can also cause
changes unforeseen or attributed inaccurately to actions of the force.
e. Furthermore, because commanders synchronize actions across the warfighting
functions to achieve an objective or obtain an end state condition, the cumulative effect
of these actions may make the impact of any individual task indistinguishable. Careful
consideration and judgment are required, particularly when asserting cause-and-effect
relationships in stability operations.
6.
helping the staff link actions and tasks to appropriate and available forms of
measurement. ORSA-trained personnel can also assist planners in developing the
assessment metrics (e.g., effects, measures of effectiveness (MOEs), measures of
performance (MOPs), and indicators).
c. Effective assessment of tasks, effects, and campaigns requires regular
collaboration with staff elements within the command, vertically with higher or lower
commands, and horizontally across interagency and multinational partners. ORSA-trained
personnel can help ensure assessment metrics are nested with both higher and
lower commands, alleviating a possible breakdown of the assessment process.
Additionally, while developing the collection plan, ORSA-trained personnel may identify
data already collected by lower-command echelons and other agencies. This prevents
duplicative data collection efforts and decreases the burden on responsible organizations.
7.
(1) The ICAF is a framework that can be used to help people from different US
Government (USG) departments and agencies work together to reach a shared
understanding of a country's conflict dynamics and consensus on potential entry points
for additional USG efforts. This assessment provides a deeper understanding of
the underlying conflict dynamics in a given country or region.
(2) ICAF teams are situation-specific and should include department/agency
representatives with relevant technical or country expertise. ICAF teams are often co-led
by the Conflict Prevention division of the US Department of State's Office of the
Coordinator for Reconstruction and Stabilization (S/CRS) and USAIDs Office for
Conflict Management and Mitigation (CMM) because people in those offices have
conflict assessment expertise, but anytime two or more departments/agencies want to
conduct an ICAF, they may do so. Unless they have conflict assessment experience,
however, they should request assistance from S/CRS Conflict Prevention or USAID
CMM.
(3) An ICAF does not make direct recommendations for future USG programs;
it relates to sectoral assessments in the following ways:
(a) Results from sectoral assessments performed in the past provide data
that is fed into the ICAF;
(b) During a situation assessment, the results of an ICAF identify sectors
most critically in need of an in-depth sectoral assessment prior to planning; or
(c) After an ICAF is conducted and a plan has been created, sectoral
assessments are conducted to assist in the design of programs.
(4) When members of the interagency perform a conflict/instability assessment
together, they reach a shared understanding of the conflict dynamics. The ICAF has been
developed by the interagency community and has interagency acceptance. Using the
ICAF, members of an interagency team are able to focus their discussion on the conflict
they are analyzing and avoid being caught up in a disagreement on the process they are
using to analyze the conflict.
(5) The USG departments/agencies most likely to participate in the use of the
ICAF are agencies with responsibilities for planning or programming foreign assistance
funds or other international engagements. However, on occasion, USG agencies
implementing domestic programs may have technical or country expertise to contribute
to an ICAF even if they do not have international programs.
For more information, see Appendix D and Interagency Conflict Assessment Framework
at: http://www.crs.state.gov/shortcut.cfm/C6WW.
e. Measuring Progress in Conflict Environments (MPICE)
(1) The MPICE framework is a catalog of metrics and a process for using
these metrics to measure the progress of stabilization and reconstruction missions in
conflict environments. MPICE metrics measure the conditions that support viable peace.
This peace is achieved when the capacity of domestic institutions to resolve disputes
peacefully overtakes the powerful motives and means for continued violent conflict.
When this state is achieved, external intervention forces can begin to hand over stability
efforts to domestic institutions.
(2) MPICE includes about 800 generic, quantitative outcome metrics that
measure institutional capacities and drivers of conflict in five sectors: safe and secure
environment, political moderation and stable democracy, rule of law, sustainable
economy, and social well-being. This comprehensive set of outcome metrics (measures
of effectiveness) enables planners to assess mission progress in an objective, systematic,
and holistic way.
(3) Development of MPICE was sponsored by the Department of Defense,
United States Institute of Peace, U.S. Agency for International Development,
Department of State, and other U.S. government agencies in cooperation with
multinational, non-governmental organization (NGO), and academic partners.
For more information on MPICE, see http://www.stottlerhenke.com/mpice/#.
f. District Stability Framework (DSF)
(1) The District Stability Framework (DSF) is a methodology designed for use
by both military and civilian personnel to identify the underlying causes of instability and
conflict in a region, devise programs to diminish the root causes of instability and
conflict, and measure the effectiveness of programming. It is employed to gather
information using the following lenses: operational environment, cultural environment,
local perceptions, and stability/instability dynamics. This information then helps
identify, prioritize, monitor, evaluate, and adjust programming targeted at diminishing the
causes of instability or conflict.
(2) The DSF has four major components: gaining situational awareness (from
the four lenses of data mentioned above); analyzing that data; designing effective
programming based on that analysis; and monitoring and evaluating programming.
(3) USAID conducts training for deploying personnel on DSF. Wherever
possible, USAID seeks to raise awareness of development and conflict mitigation and to
help preempt these issues before military and civilian personnel are sent into hostile areas
in reaction to them.
Refer to http://www.usaid.gov/our_work/global_partnerships/ma/dsf.html for more
information.
g. Criminal Justice Sector Assessment Rating Tool (CJSART)
progress and accomplishments against standardized benchmarks. Used in its entirety, the
CJSART holistically examines a country's laws, judicial institutions, law enforcement
organizations, border security, and corrections systems as well as a country's adherence
to international rule of law standards such as bilateral and multilateral treaties.
For more information on CJSART or to request a copy, contact the US Department of State,
Bureau of International Narcotics and Law Enforcement Affairs, at (202) 647-5171.
h. Tactical Conflict Assessment and Planning Framework (TCAPF)
(1) In response to a request by DOD and building on work done by CMM and
the Central Intelligence Agency's Office of Military Affairs, USAID created the Tactical
Conflict Assessment Framework. Adapted by the US Army as the Tactical Conflict
Assessment and Planning Framework (TCAPF), it is a standardized diagnostic tool
designed for use by both military and civilian personnel. It is employed to gather
information from local inhabitants to identify the causes of instability or conflict in
tactical areas of operation. This information helps identify, prioritize, monitor, evaluate,
and adjust civil-military programming targeted at diminishing the causes of instability or
conflict. The TCAPF has four major components:
(a) Identifying causes of instability/conflict
(b) The local context
(c) Gathering information
(d) Designing effective programming
(2) The TCAPF training also includes a detailed case study based on a real
situation in a West African area in which trainees are tasked with identifying the causes
of instability in the country and designing effective programs to mitigate them.
For further discussion on other assessment frameworks, see Appendix D for a discussion
of the Interagency Conflict Assessment Framework, Appendix E for information on the
Tactical Conflict Assessment and Planning Framework, and Appendix F for information
on the NATO Operations Assessment Handbook.
CHAPTER III
ASSESSMENT COMPONENTS
Within the commander's decision cycle, assessment is the determination of the
effect of operations as they relate to overall mission accomplishment.
Fundamental to assessment are judgments about progress in designated
mission areas as measured against the expected progress in those same
mission areas. These judgments allow the commander and the staff to
determine where adjustments must be made to operations and serve as a
catalyst for planning. Ultimately, assessment allows the commander and staff
to keep pace with a constantly evolving situation while staying focused on
mission accomplishment.
Naval Warfare Publication 3-32,
Maritime Operations at the Operational Level of War
1.
General
2. Objectives and Effects
Figure III-2. Notional Assessment Structure
(2) The proximate cause of effects in interactively complex situations can be
difficult to predict. Even direct effects in these situations can be more difficult to create,
predict, and measure, particularly when they relate to moral and cognitive issues (such as
religion and the mind of the adversary respectively). Indirect effects in these situations
often are difficult to foresee. Indirect effects often can be unintended and undesired
since there will always be gaps in our understanding of the operational
environment. Commanders and planners must appreciate that unpredictable third-party
actions, unintended consequences of friendly operations, subordinate initiative and
creativity, and the fog and friction of conflict will contribute to an uncertain operational
environment.
(3) The use of effects in planning can help commanders and staff determine the
tasks required to achieve objectives and use other elements of operational design more
effectively by clarifying the relationships between centers of gravity (COGs), lines of
operation (LOOs) and/or lines of effort, decisive points, and termination criteria. This
linkage allows for efficient use of desired effects in planning. The commander and
planners continue to develop and refine desired effects throughout joint operation
planning. Monitoring progress toward attaining desired effects and avoiding undesired
effects continues throughout execution.
For more information on objectives and effects, see JP 5-0, Joint Operation Planning.
3.
(b) Dispute resolution mechanisms exist and are being used to clarify or
resolve remaining vital issues among parties to the conflict.
(c) Percent of military-aged population that expresses an inclination to
support or join a violent faction (by identity group).
(d) Degree to which members of formerly warring factions and competing
identity groups can travel freely in areas controlled by their rivals.
(e) Detainees/prisoners are subjected to torture, cruel or inhuman
treatment, beatings, or psychological pressures (by identity group).
(f) Safe and sustainable return of displaced persons and refugees to former
neighborhoods.
(g) Estimated percentage of gross domestic product accounted for by illicit
economic transactions.
(h) Level of public satisfaction with electrical power delivery (by identity
group and region).
(i) Perception that ethnic identity polarizes society (by identity group).
(j) Perception of heads of households that, under normal conditions, they
are able to meet their food needs either by growing foodstuffs/raising livestock or
purchasing food on the market.
(2) MOPs. They are generally quantitative, but also can apply qualitative
attributes to task accomplishment. MOPs are used in most aspects of combat assessment,
since it typically seeks specific, quantitative data or a direct observation of an event to
determine accomplishment of tactical tasks. But MOPs have relevance for noncombat
operations as well (e.g., tons of relief supplies delivered or noncombatants evacuated).
MOPs also can be used to measure operational and strategic tasks, but the type of
measurement may not be as precise or as easy to observe.
b. The assessment process and related measures should be relevant, measurable,
responsive, and resourced so there is no false impression of accomplishment.
Quantitative measures can be helpful in this regard.
(1) Relevant. MOPs and MOEs should be relevant to the task, effect,
operation, the operational environment, the end state, and the commanders decisions.
This criterion helps avoid collecting and analyzing information that is of no value to a
specific operation. It also helps ensure efficiency by eliminating redundant efforts.
(2) Measurable. Assessment measures should have qualitative or quantitative
standards they can be measured against. To effectively measure change, a baseline
measurement should be established prior to execution to facilitate accurate assessment
throughout the operation. Both MOPs and MOEs can be quantitative or qualitative in
nature, but meaningful quantitative measures are preferred because they are less
susceptible to subjective interpretation.
(3) Responsive. Assessment processes should detect situation changes quickly
enough to enable effective response by the staff and timely decisions by the commander.
The JFC and staff should consider the time required for an action or actions to produce
desired results within the operational environment and develop indicators that can
respond accordingly. Many actions directed by the JFC require time to implement and
may take even longer to produce a measurable result.
(4) Resourced. To be effective, assessment must be adequately resourced.
Staffs should ensure resource requirements for data collection efforts and analysis are
built into plans and monitored. Effective assessment can help avoid both duplication of
tasks and unnecessary actions, which in turn can help preserve combat power.
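The "measurable" criterion above implies recording a baseline before execution and expressing later observations as change against it. A minimal sketch (the indicator and figures are hypothetical):

```python
# Minimal sketch of the "measurable" criterion: record a baseline before
# execution, then express later observations as change against it.
def percent_change(baseline: float, current: float) -> float:
    """Change relative to the baseline measurement, in percent."""
    if baseline == 0:
        raise ValueError("baseline must be nonzero for percent change")
    return 100.0 * (current - baseline) / baseline

attacks_baseline = 40  # e.g., reported attacks per month before execution
attacks_now = 30       # observed during the operation
change = percent_change(attacks_baseline, attacks_now)  # -25.0 percent
```

Without the pre-execution baseline, the later observation of 30 attacks has no context for judging progress, which is the point of the criterion.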
WHY METRICS MATTER
To end this insurgency and achieve peace, we may need more than just extra
troops, new resources and a new campaign plan: as General McChrystal has
emphasized, we need a new operational culture. Organizations manage what
they measure, and they measure what their leaders tell them to report on.
Thus, one key way for a leadership team to shift an organizations focus is to
change reporting requirements and the associated measures of performance
and effectiveness...
Metrics must be meaningful to multiple audiences, including NATO
commanders, intelligence and operations staffs, political leaders, members of
Parliament and Congress in troop-contributing nations, academic analysts,
journalists, and, most importantly, ordinary Afghans and people around the
world.
Dr. David Kilcullen
Measuring Progress In Afghanistan
Kabul, December 2009
experience, and expertise for determining and selecting specific MOPs relative to the
assigned task(s) reside.
c. Like MOEs, MOPs should demonstrate particular characteristics. They are tied
to tasks and task assessment; therefore, they should be appropriate to the assigned task or
set of related tasks. They should be measurable and are generally focused on the
immediate results of tactical actions. They are designed to answer whether a task or
related set of tasks was conducted or conducted successfully, whether it/they need to be
conducted again, whether the tasked organizations are doing things right. Depending
on the type of DIME action employed, a MOP may measure the delivery of lethal fires on a
key node, the capture or killing of a high-value individual, the issuance of a diplomatic
demarche, informal contact with tribal or local leaders, the level of completion of a set of
tasks related to security operations, or the level of progress regarding completion of
economic recovery programs. For some tasks, the MOPs are relatively simple: yes or no
(e.g., Was the target hit? Was the demarche issued?). For other tasks, the MOPs may be
more complex (e.g., What percentage of tribal leaders have been contacted or engaged?
What percentage of security operations or economic recovery programs has successfully
been completed?). In instances where the tasks are more complex or are grouped together
as a collection of related tasks (i.e., the completion of security operations or economic
recovery programs) several individual and distinct milestones may need to be employed
as MOP criteria. Ultimately, however, MOPs are used to determine the status of tasks,
LOOs, or operations, activities, and actions, conducted to achieve behavioral changes
(effects) in adversary or neutral systems.
d. One source available for developing task performance measures is the universal
joint task list (UJTL) or the Service-specific task lists. Forces can use these lists as a
baseline to develop mission tasks and corresponding
measures. Applied to the tasks, purposes and conditions present at the time of planning,
these task lists can facilitate MOP creation. Measures of performance developed from
these lists can be modified as appropriate to the particular operation. As COAs are
developed and analyzed during the planning process, the tasks and purposes of
subordinate commands are identified to determine what their mission essential tasks may
be for mission success. For example, an OPT identifies a task during COA development
to isolate the operational area with the purpose of decreasing threat effectiveness (by
constraining the threat's ability to resupply). The OPT further develops this COA during
COA analysis (wargaming).
e. Upon completion of the baseline system-of-systems analysis or upon completion
of course of action development and selection, the JPG/OPT updates the operation plan
(OPLAN)/operation order (OPORD) to reflect the resource assigned to each action (the
identification of resources may be limited to identification at the component or
interagency level). For each resulting action, the tasked resource (command or agency)
then develops MOP for assessing its progress in completing assigned actions against key
nodes, and identifies collection requirements for assessing MOP. For military task
assessment, MOP status is normally maintained by the designated organization until
required for effect-to-task comparison, although periodic reports for other purposes (e.g.,
branch/sequel development) may be required by the CCMD or joint task force (JTF)
headquarters. Reporting responsibilities for nonmilitary tasks are coordinated between
the CCMD/JTF and applicable interagency and multinational representatives.
f. With respect to nonmilitary actions, coordination may need to occur
between the CCMD/JTF staff and the State Department representatives or applicable
American Embassy staff to ensure visibility on diplomatic actions associated with the
establishment of key public Ministries or Departments. Similarly, coordination may need
to occur with the Treasury Department, Embassy staff, or USAID to gain insight into
actions associated with the administration of humanitarian aid or economic development
packages. In many cases, this coordination will occur through the CCMD staff or
through the joint interagency coordination group. In other cases, the JTF staff may have
been authorized more direct access.
g. MOP status is normally reported in a summary format (stoplight, gumball, or
thermograph/bar chart) as RED, AMBER, or GREEN. MOP status may be based
on percentage of planned activity completed, attainment of specific milestones, battle
damage assessment reports, or combat assessments. For assessment and deficiency
analysis purposes, MOP status should reflect action accomplishment status, with an
assessment of GREEN reserved for completion of the action. Commands which have
experimented with using multiple criteria for MOP status reports (i.e., defining
GREEN as action completed and/or on schedule) have experienced delays in
determining or interpreting the basis for MOP status reports, resulting in a difficult and at
times confusing deficiency analysis process. Some commands have used RED to
indicate an action is off-plan or unsuccessful; AMBER to indicate a task is on plan, but
not yet completed; and GREEN to indicate an action that is complete. This type of
rating scheme allows the component to tell the higher headquarters commander, "I am
RED. I can't do what I am supposed to do and need help." When told that by a
component, the commander either accepts the risk, changes the parameters, or reallocates
assets to assist the component in completing the MOP in a way that supports the overall
task they were given.
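The three-state reporting scheme just described can be sketched as a simple rule (an illustrative sketch of the convention some commands have used, not a mandated format): GREEN only when the action is complete, AMBER when on plan but incomplete, RED when off-plan or unsuccessful.

```python
def mop_status(complete: bool, on_plan: bool) -> str:
    # GREEN is reserved for completed actions; AMBER marks a task that is
    # on plan but not yet complete; RED flags an off-plan or unsuccessful
    # action, signaling that the component needs help.
    if complete:
        return "GREEN"
    if on_plan:
        return "AMBER"
    return "RED"
```

Keeping the three states mutually exclusive avoids the ambiguity the text warns about, where GREEN defined as "completed and/or on schedule" muddies deficiency analysis.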
5.
a. The development of MOEs and indicators for desired and undesired effects can
commence immediately after the identification of desired and undesired effects while
MOP and task metric development is normally conducted concurrently with or shortly
following the course of action development phase of the JOPP. Since the intent of the
MOE and indicators is to build an assessment model rather than a COA, the development
of MOEs and indicators is not dependent upon which key nodes are selected for action.
While MOPs are normally developed by the tasked component or resource, development
of MOEs and their associated indicators and assessment criteria is typically the
responsibility of the JPG/OPT or, when established, the assessment team. The intent in
developing MOEs and their associated indicators is to build an accurate baseline model
for determining whether joint and supporting agency actions are driving target systems
toward or away from exhibiting the desired effects. As strategic and operational level
effects are seldom attained or exhibited instantaneously, MOEs provide a framework for
conducting trend analysis of system behavior or capability changes that occur over time,
based on the observation of specific, discrete indicators.
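Since MOEs frame trend analysis of discrete indicator observations over time, one rudimentary way to sketch that idea (purely illustrative; assessment cells use far richer methods) is to compare recent observations against earlier ones:

```python
# Rudimentary trend sketch for an MOE indicator time series: compare the
# mean of the most recent observations with the mean of the earliest ones.
def trend(observations: list[float], window: int = 3) -> str:
    if len(observations) < 2 * window:
        return "insufficient data"
    earlier = sum(observations[:window]) / window
    recent = sum(observations[-window:]) / window
    if recent > earlier:
        return "improving"   # more of the indicator over time
    if recent < earlier:
        return "declining"
    return "stable"

# e.g., monthly counts for a hypothetical indicator (tips-hotline calls)
tips_per_month = [5, 6, 5, 8, 9, 11]
direction = trend(tips_per_month)  # "improving"
```

Whether "improving" is actually desirable depends on the indicator's polarity (more hotline tips is good; more attacks is not), which is part of the judgment the text describes.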
b. In developing MOEs and indicators for effects, the assessment team relies
heavily upon the expertise of J-5/J-3 planners, ISR planners, J-2 personnel and JIPOE
analysts, interagency and multinational representatives, and other subject matter experts
to ensure that MOEs and indicators for a particular effect are observable and will provide
a reliable assessment. Where possible, the assessment team associates nodes with
specific indicators in order to focus ISR planning and collection efforts. Additional
information required for indicator criteria development that is not available through
collaboration is submitted as a request for information. Upon completion of MOE and
indicator development, appropriate indicators are provided to the ISR planner, who
coordinates with the JIOC to align these indicators against the specific ISR assets or
disciplines that will be tasked. Reporting responsibilities and periodicities are then
established by the JTF J-2/JISE and CCMD J-2/JIOC and promulgated in a collection
plan or OPORD annex.
c. MOEs and initial/draft indicators are developed for each effect individually.
The following procedures are not prescriptive and should be tailored for individual
command requirements and time constraints. Several commands have found it useful to
convene a small, ad hoc working group to develop a draft set of MOEs and indicators for
review by the more inclusive, formally established assessment team. See Figure III-4 for
an outline of the process steps.
(1) Step 1: Analyze the Desired Effect. Prior to developing MOE, the
assessment team analyzes the desired effect to ensure there is a common understanding of
the desired/undesired behavior or capability the effect describes, and how the
desired/undesired behavior or capability would likely be exhibited by the specific target
system (particularly if the effect is phase-specific). A common understanding of intent is
critical to ensuring that the associated MOE reflect activities that, when analyzed, will
accurately depict effect attainment status during plan development or OPORD execution.
References that may assist in analyzing the effect include the detailed effect description,
red team summaries, and political, military, economic, social, infrastructure, and
information (PMESII) system summaries. If the effect is deemed unclear after
assessment team review (ambiguous wording, description of dual-behaviors, etc.), the
assessment team recommends modifications to the JPG/OPT.
MOEs are evaluated as a group against the effect. The assessment team must reach consensus that, given that information is available for each of the refined MOEs, the refined MOEs as a group would allow for an accurate assessment of the effect. If the MOEs are
deemed insufficient, additional MOEs must be developed, or the effect must be refined or
discarded.
Assessment encompasses all efforts to evaluate effects and gauge progress
toward accomplishment of effects and objectives. It also helps evaluate
requirements for future action. It seeks to answer two questions: How is the conflict going? And what needs to be done next? Contrary to many common
depictions and descriptions, assessment is not really a separate stage of
planning or tasking processes. Rather, it is interleaved throughout planning
and execution and is integral to them, since it works together with planning to
determine future courses of action and is conducted in large part during
execution.
Air Force Doctrine Document 2
(4) Step 4: Develop MOE Indicators. In this step, indicators are developed
for those MOEs refined in step 3. Considering each MOE individually, the assessment
team identifies specific discrete indicators that would allow an assessment as to the level
of activity described by the MOE under consideration (for example, indicators for an
MOE of increase/decrease in out-of-cycle military activity may include aircraft sortie
rates, force deployment status, etc.). Indicators must be measurable (at least
potentially, subject to later confirmation by collection analysts), directly related to the
activity identified by the MOE, and appropriate given knowledge of the target system or
systems. Additionally, indicators must provide data that would indicate a change in
MOE in sufficient time for the assessment to be of use for the commanders decision
cycle. At the operational level, some effects may be created only over a lengthy period of
time, and changes in data for the most reliable associated indicators may only be
measured sporadically or very gradually. In these cases, consideration should be given to
developing or identifying additional indicators that, while perhaps less reliable, may
provide more timely interim changes. Where possible, MOEs should be tied to specific
nodes to assist in collection planning. As during step 2, some commands have found it
useful to follow an initial ad hoc brainstorming session with a round-robin solicitation of system-specific suggestions from each PMESII area, each warfare discipline/functional area, and interagency representatives. If no measurable indicators can be identified that
would provide an accurate assessment of the change in condition identified by the MOE
(considering the attributes of the target system being assessed), the MOE under
consideration is discarded. One source for assistance in developing MOEs and indicators is the United States Institute of Peace's Measuring Progress in Conflict Environments
(MPICE), which provides good examples of MOEs and indicators that have been vetted
by the interagency, cover all five sectors of stability operations, and address both drivers
of conflict and institutional performance in dealing with them.
(5) Step 5: Evaluate MOE Indicators. Following indicator development,
indicators are evaluated as a group. During group indicator evaluation, the assessment team must reach consensus that, given that information is available for each of the refined indicators, the indicators as a group would allow for an accurate assessment of the MOE.
If the indicators are deemed insufficient, additional indicators must be developed, or the
MOE must be refined or discarded.
(6) Step 6: Rank MOEs. The next step in the MOE development process is to
rank the MOEs for the effect under consideration in preparation for MOE reverse order
review. Preferably, MOEs for a given effect are assessed against a common set of
independent criteria, then ranked based upon the results of that assessment (commonly
used criteria include observable, timely, and level of direct relationship to the effect).
(7) Step 7: Reverse Order Review. Having ranked the MOEs, the final step
in developing MOEs is to conduct a reverse order review to ensure that only those MOEs
that are actually required (with an acceptable level of risk) to assess the effect are utilized
in the effect assessment model, both to streamline the effect assessment process and to
conserve ISR resources. In this step, the lowest ranking MOE is temporarily discarded;
the assessment team then evaluates the remaining MOEs against the effect. If the
assessment team reaches consensus that the remaining MOEs as a group would allow for
an accurate assessment of the effect and that use of the remaining MOEs alone would not
present an unacceptable level of risk of deception, the lowest ranked MOE is discarded.
This process is repeated with each remaining MOE until the assessment team determines
that all remaining MOEs are required.
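The reverse order review above amounts to a simple pruning loop. The following is a minimal sketch under stated assumptions: the function name is hypothetical, and the consensus check, which in practice is a human judgment by the assessment team, is represented by a callback.

```python
def reverse_order_review(ranked_moes, team_concurs):
    """Prune lowest-ranked MOEs while the team agrees the remainder
    still allow an accurate effect assessment at acceptable risk.

    ranked_moes: list ordered from highest- to lowest-ranked MOE.
    team_concurs: callback standing in for assessment-team consensus;
    returns True if the candidate set of MOEs suffices.
    """
    moes = list(ranked_moes)
    while len(moes) > 1:
        candidate = moes[:-1]      # temporarily discard the lowest-ranked MOE
        if team_concurs(candidate):
            moes = candidate       # consensus reached: discard it for good
        else:
            break                  # all remaining MOEs are required
    return moes
```

For example, with a (purely illustrative) consensus rule that at least three MOEs must remain, a five-MOE list is pruned to its top three.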
(8) Step 8: Weight MOEs. In order to complete MOE development, the
MOEs require weighting criteria. When this occurs, the MOEs are weighted against each
other based on their relative importance in assessing the associated effect. The
assignment of weight may be based on a subjective informed analysis of the selected
MOE (i.e., a given MOE is considered to be of greater significance than another), or it
may be based on a more precise knowledge of the system under assessment. In the
absence of either a subjective or objective basis to apply weighting criteria, all MOEs for
a given effect may be weighted equally. Upon completion of this step, indicator criteria
development begins.
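The weighting described above can be sketched as a normalized weighted average of MOE status scores. This is an illustrative assumption, not a prescribed formula: the function name and the 0.0-1.0 score scale are hypothetical, and equal weighting is the fallback the text describes.

```python
def effect_score(moe_scores, weights=None):
    """Roll MOE status scores (assumed 0.0-1.0) up to an effect score.

    weights: optional relative-importance weights, one per MOE. Absent
    a subjective or objective basis for weighting, all MOEs for a given
    effect are weighted equally.
    """
    if weights is None:
        weights = [1.0] * len(moe_scores)   # equal-weighting fallback
    total = sum(weights)
    return sum(s * w for s, w in zip(moe_scores, weights)) / total

# Equal weights: (0.8 + 0.4 + 0.6) / 3 = 0.6
# First MOE judged twice as significant: (1.6 + 0.4 + 0.6) / 4 = 0.65
```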
For an example of MOE and MOE indicator development, see Appendix C, Measure of
Effectiveness and Indicator Development.
6. Indicator Criteria Development
(1) Step 1: Review Indicators. The initial step in the threshold development
process is to ensure that the MOE under consideration clearly identifies the activity that is
being measured. When a common understanding of the MOE is gained, the indicators
can be better developed to support the MOE. They are reviewed individually to ensure
that they are measurable and are directly related to the MOE. The indicators are then
reviewed to ensure that they are relevant, responsive, and can be efficiently resourced.
Indicators are not considered measurable if data will not be available at their required periodicities. The indicators should also collectively provide sufficient coverage of the MOE under consideration, with enough cross-verification among indicators to ensure accuracy and validity and to reduce the risk of manipulation (friendly or adversary). If the
indicators are insufficient to allow for MOE status determination, additional indicators
must be developed, or the MOE must be refined or discarded.
(4) Step 4: Review in Reverse Order. Having ranked the indicators, conduct
a reverse order review to ensure that only those indicators that are actually required (with
an acceptable level of risk) to assess the MOE are tasked for collection. As with the
reverse order MOE review, the lowest ranking indicator is temporarily discarded; the
assessment team then evaluates the remaining indicators against the MOE. If the
assessment team reaches consensus that the remaining indicators as a group would allow
for an accurate assessment of the MOE and that use of the remaining indicators would
not present an unacceptable level of risk of deception, the lowest ranked indicator is
discarded. This process is repeated with each successive indicator until the assessment
team determines that all remaining indicators are required.
(5) Step 5: Weight Indicators. In preparation for populating the assessment
model and data management tool to be used during assessment execution, the assessment
team weights the indicators against each other based on their relative importance in
assessing MOE thresholds. The assignment of weights is a subjective process; because the data reports during the assessment process provide only a starting point for analysis by the assessment cell, all indicators for a given MOE may be equally weighted barring any obvious difference in importance.
(6) Step 6: Repeat Process for Remaining MOE. The indicator criteria
development process is conducted for each MOE individually; as the process is
completed for an MOE, it is repeated for each successive MOE.
(7) Step 7: Pass Results to the Collection Manager. Upon completion of
MOE/indicator planning, indicators developed by the assessment team are provided to the
ISR Planner, who coordinates with the JTF J-2/JISE/JIOC to include indicators in the
collection plan and align specific ISR or collection assets against them, as appropriate.
Not all indicators will require the collection manager to apply assets against them as
part of the collection plan.
(8) Step 8: Populate Assessment Model. Some commands have successfully
employed standard spreadsheets formatted with embedded macros as a means to store
assessment parameters and capture assessment-related data. Others have used software
support applications to facilitate assessment planning and execution. Regardless of the
mechanism, the assessment model should be completed and populated prior to the start of
operations.
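Whether the model lives in a spreadsheet or a support application, the parameters it must hold are the same: effects, their weighted MOEs, and each MOE's weighted indicators with reporting responsibilities and periodicities. A minimal sketch of such a structure follows; the class and field names are hypothetical, and the example effect and indicator are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    weight: float = 1.0
    reporter: str = ""      # staff element responsible for reporting
    periodicity: str = ""   # e.g., "daily", "weekly"

@dataclass
class MOE:
    statement: str
    weight: float = 1.0
    indicators: list = field(default_factory=list)

@dataclass
class Effect:
    statement: str
    moes: list = field(default_factory=list)

# Populated prior to the start of operations, for example:
effect = Effect(
    "Host nation population has reliable essential services",
    moes=[MOE("Increase in average daily hours of electrical power",
              indicators=[Indicator("Daily power hours in key urban areas",
                                    reporter="JTF J-2",
                                    periodicity="daily")])],
)
```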
7. Considerations
The volume of information itself becomes a form of friction, precipitating
confusion, lengthening decision times, and diminishing predictive awareness.
Some of this can be mitigated by comprehensive intelligence and assessment
planning before operations begin.
Air Force Doctrine Document 2
Although these procedures include a methodology to inhibit the unnecessary and
unproductive development of excessive MOE and associated indicators, there is a
potential tendency toward MOE and indicator proliferation. Should a large number of
MOE and indicators become a part of the assessment plan, effective data analysis will be
a challenge and the focus on the actual mission effects and objectives may become lost.
In these instances, the staff may be overwhelmed with the amount of data being
measured, with the end result that the assessment becomes a casualty of the process. In
other words, if the staff is measuring everything, they may find themselves assessing
nothing. For operational level planning purposes, eight to twelve effects are a realistic
baseline to support the direction of DIME actions and the assessment. Additionally, four
to six MOEs per effect and four to six indicators per MOE have proven to be a successful
framework to support the assessment process. However, the actual number of effects,
MOE and indicators should be based on the mission objectives and mission requirements
and not preconceived restrictions.
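A quick back-of-the-envelope count, using only the figures from the paragraph above, shows why proliferation overwhelms the staff: even the recommended baseline implies a collection and analysis burden of well over a hundred indicators.

```python
# Collection burden implied by the recommended operational-level baseline:
# 8-12 effects, 4-6 MOEs per effect, 4-6 indicators per MOE.
low = 8 * 4 * 4       # 128 indicators at the low end
high = 12 * 6 * 6     # 432 indicators at the high end
print(low, high)
```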
CHAPTER IV
DEVELOPING THE ASSESSMENT PLAN
A critical element of the commander's planning guidance is determining which
formal assessment plans to develop. An assessment plan focused on the end
state often works well. It is also possible, and may be desirable, to develop an
entire formal assessment plan for an intermediate objective, a named operation
subordinate to the base operation plan, or a named operation focused solely on
a single line of operations or geographic area. The time, resources, and added
complexity involved in generating a formal assessment plan strictly limit the
number of such efforts.
US Army Field Manual 5-0, The Operations Process,
March 2010
1. General
a. Planning for assessment begins during mission analysis when the commander
and staff consider what to measure and how to measure it in order to determine progress
toward accomplishing a task, creating an effect, or achieving an objective. Commanders
and their staffs use assessment considerations to help guide operational design because
these considerations can affect the sequence and type of actions along LOOs and/or lines
of effort. Early and continuous involvement of assessment planners in joint operation
planning helps to ensure assessment is relevant to the plan (see Appendix C, Assessment
Development During The Joint Operation Planning Process).
b. Friendly, adversary, and neutral DIME actions in the operational environment
can significantly impact military planning and execution. Assessment can help to
evaluate the results of these actions. This typically requires collaboration with other agencies and multinational partners, preferably within a common, accepted process, in the interest of unified action. For example, failure to coordinate overflight and access
agreements with foreign governments in advance or to adhere to international law
regarding sovereignty of foreign airspace could result in mission delay, failure to meet
US objectives, and/or an international incident. Many of the organizations with which coordination is needed may be outside the JFC's authority. Accordingly, the JFC should grant some joint force organizations authority for direct coordination with key outside organizations, such as interagency elements from the Department of State or the Department of Homeland Security, national intelligence agencies, intelligence sources in other nations, and other combatant commands, to the extent necessary to ensure timely and accurate assessments.
c. Developing the assessment plan is a continuous process that is refined
throughout all planning phases and will not be completed until the OPLAN/OPORD is
approved and published. The building of an assessment plan, including the development
of collection requirements, normally begins during mission analysis after identification of
the initial desired and undesired effects (see Figure IV-1). This identification process,
which is supported by the development during JIPOE of a systems perspective of the
operational environment, will often continue through COA development and selection. Expertise from outside organizations, agencies, or external centers of excellence is desired, but may also extend assessment plan development timelines.
a. Commanders and staffs develop assessment plans during planning using the six
steps identified in Figure IV-2. Once commanders and their staffs develop the
assessment plan, they apply the assessment process of monitor, evaluate, and recommend
or direct continuously throughout the remainder of joint operation planning and/or
execution.
b. Step 1: Gather Tools and Assessment Data. Joint operation planning begins
when an appropriate authority recognizes a potential for military capability to be
employed in response to a potential or actual crisis. At the strategic level, that authority, the President, Secretary of Defense (SecDef), or Chairman of the Joint Chiefs of Staff (CJCS), initiates planning by deciding to develop military options.
The Guidance for Employment of the Force, Joint Strategic Capabilities Plan, and
related strategic guidance documents (when applicable) serve as the primary guidance to
begin deliberate planning. CCDRs and other commanders also initiate planning on their
own authority when they identify a planning requirement not directed by higher
IV-3
Chapter IV
authority. Additionally, analyses of developing or immediate crises may result in the
President, SecDef, or CJCS initiating military planning through a warning order or other
planning directive. Military options normally are developed in combination with other
nonmilitary options so that the President can respond with all the appropriate instruments
of national power. Whether or not planning begins as described here, the commander
may act within approved authorities and rules of engagement (ROE) in an immediate
crisis. Staffs begin updating their running estimates and gather the tools necessary for
mission analysis and continued planning. Specific tools and information gathered
regarding assessment include, but are not limited to:
(1) The higher headquarters plan or order, including the assessment annex if
available.
(2) If replacing a unit, any current assessments and assessment products.
(3) Relevant assessment products (classified or open-source) produced by
civilian and military organizations.
(4) The identification of potential data sources, including academic institutions
and civilian subject matter experts.
c.
governments. The chief of staff aggressively requires staff principals and subject matter
experts to participate in processing the formal assessment and in generating smart,
actionable recommendations.
g.
Incorporating the assessment plan into the appropriate plans and/or orders is the
recommended mechanism for providing guidance and direction to subordinate
organizations or requests for key external stakeholder assistance and support. Desired
and undesired effects are most effectively communicated in the main body of the base
plan or order and may be repeated in the Operations annex (Annex C). The assessment
plan may be included as an appendix to the Operations annex, or alternatively, in the
Reports annex and should provide a detailed matrix of the MOEs associated with the
identified desired effects as well as subordinate indicators (see Figure IV-3 for an
example). Effect descriptions, if not included in the base order, should be included as well. Criteria for the establishment of MOE and indicator status thresholds (i.e., good and bad, or positive, negative, or no change) should also be identified, along with any weighting requirements applied to individual MOEs or indicators. The assessment plan
should identify reporting responsibilities for specific MOE and indicators. Although not
formally included in the assessment plan, approved MOE indicators also should form a
key element of the collection plan detailed in the Intelligence Annex (Annex B).
Changes to MOEs and/or MOE indicators or associated criteria are directed by fragmentary orders (FRAGORDs) and may be referenced on the supported command's webpage (if developed).
4. Organization
significant aspects of the operational environment. Intelligence and analytic expertise is
essential in selecting the proper MOEs, indicators, and associated criteria levels relative
to the desired effects. Additionally, if the required expertise is not resident within the
command or joint force, requests for information and/or outreach to interagency and
multinational partners or centers of excellence may be required. Intelligence support and
expertise will also be critical to ensure that assessment indicators are a part of the command's collection plan.
c. Responsibilities for conducting assessments are normally assigned to an AC (or
similar assessment-focused staff element), operating under the direction of a specifically
designated representative of either the commands J-3 or J-5 and often designated as the
assessment supervisor. The AC may operate either full-time or convene periodically,
depending upon the level of effort required for a given operation.
(1) The AC must be sufficient in size to coordinate efforts and manage information in developing staff assessments, but not so large that it takes on the entirety of the assessment function, with the attendant tendency to develop additional burdensome reporting requirements and independently build a stovepiped assessment.
(2) Proper placement of the AC is also important, and should take into account
appropriate staff oversight and integration with the entire staff. Observations indicate the
potential for the AC to take on the focus of the particular staff directorate with which it is
associated. For example, if it resides in the J-2, it could have more of an intelligence
collection or enemy focus, in J-3 an operational execution focus, and in J-5 a plans focus.
Likewise, if it is directly subordinate to the COS, it may not have sufficient principal staff
oversight. However, the most prevalent location observed for the AC (or similar) is
within the joint forces J-5, with clear direction that assessment is a staff-wide function.
d. Responsibilities of the AC typically include the initial collation and analysis of
indicator data, the evaluation of the collected data in terms of effects status (including
initiation of the deficiency analysis process where appropriate), and development of
potential recommendations for the JPG/OPT. AC core membership normally includes an
ISR planner and/or collection manager, intelligence planner, political-military planner,
functional area planners, information operations planner, interagency staff
representatives, and special technical operations planner(s). Additional members of the
AC may include JIPOE analysts, representatives from subordinate and supporting headquarters staffs, and representatives from interorganizational partners, as needed.
e. Assessment works best when supported and supporting plans and their
assessments link and relate to each other. Coordination during planning between the
planning staffs at various levels, to include interorganizational partners, to link and relate
assessment plans will improve relevance and streamline analysis during plan execution.
CHAPTER V
STAFF ASSESSMENTS DURING EXECUTION

If you know the enemy and know yourself, your victory will not stand in doubt; if you know Heaven and know Earth, you may make your victory complete.

Sun Tzu

1. General

a. As part of the overall assessment (see Figure V-1), the staff assessment attempts to measure the progress toward or away from the achievement of desired conditions. It should begin as soon as information concerning MOPs, MOEs, and associated indicators is received. Assessment may even begin prior to the execution of planned tasks, in order to validate indicator criteria thresholds and to develop baselines for trend analysis when joint tasks are initiated. For example, if an identified indicator is related to the average daily hours of electrical power available in key host nation urban areas, baseline data collection should begin during the pre-execution phase to provide better granularity and fidelity to post-execution assessments.

b. While variations exist, staff assessment is conducted in three distinct phases: effects assessment, task assessment, and, if needed, deficiency analysis (see Figure V-2). This chapter will explore each phase and their relationships.

2. Assessment Periodicity
3. Effects Assessment
a. Effects assessment assesses those desired effects required to affect friendly and
adversary behavior and capability to conduct and/or continue operations and/or actions.
Effects assessment is broader than task assessment and at the operational level supports
the determination of the achievement of objectives through the detailed assessment of the
associated effects. Effects provide an important linkage or bridge between the
overarching objectives and the tasks that are employed to create the effects to accomplish
them. The goal of effects assessment is, therefore, to determine whether the application of the instruments of national power is making progress toward achievement of the desired conditions in the operational environment.
b. Upon receipt of indicator data, assessment personnel prepare for the staff-officer
level AC (or similar) review and/or formal assessment board by reviewing the data and
producing MOE summary reports and a draft assessment summary. The draft assessment
summary (see Figure V-3) serves as the baseline for review by the plenary AC, which
meets at a periodicity specified by the J-3/J-5 and is led by the AC chief or other designated assessment lead.

Figure V-3. Example Assessment Summary

4. Task Assessment
a. Following assessment of the status of the desired and undesired effects, the AC
or team verifies the status of tasks assigned to supporting/functional commanders and
agencies to achieve the desired effects. If task status was provided to assessment
personnel prior to the formal AC meeting, discussion of individual tasks may be
conducted by exception.
b. Task assessment typically uses MOPs to evaluate task accomplishment. The
results of tactical tasks are often physical in nature, but also can reflect the impact on
specific functions and systems. Tactical-level assessment may include assessing progress
by phase lines; neutralization of enemy forces; control of key terrain or resources; and
security, relief, or reconstruction tasks. Assessment of results at the tactical level also
helps commanders determine operational and strategic level progress, so JFCs must have
a comprehensive, integrated assessment plan that links assessment activities and
measures at all levels.
c. Combat assessment is an example of task assessment and is a term that can
encompass many tactical-level assessment actions. Combat assessment typically focuses
on determining the results of weapons engagement (with both lethal and nonlethal
capabilities), and thus is an important component of joint fires and the joint targeting
process (see JP 3-60, Joint Targeting).
d. Task assessment intersects effects assessment where there is a failure to achieve,
or a failure to make progress achieving, a desired effect in accordance with the required
timeline(s) designated by the plan. It is at this juncture that task assessment becomes
critical to the effects assessment. Assessment personnel attempt to determine why
sufficient or timely progress is not occurring. In this scenario, the task assessment
becomes a key element of the analysis and an effect-to-task comparison is conducted for
the effects in question to determine if task accomplishment deficiencies are a potential
factor in the non-achievement of the desired effect(s).
5. Deficiency Analysis
b. The first effort in deficiency analysis is the comparison of the assessed status of desired effects to the completion status of the associated joint tasks and/or actions. Normally using effect-to-task summary charts developed by assessment personnel in preparation for the formal assessment board, the AC notes discrepancies between the status of the completed tasks and the status of the associated effect. Figure V-4 illustrates an example effect-to-task display.

Figure V-4. Example Effects-to-Task Summary

(1) Where a discrepancy exists (i.e., a desired effect is assessed as RED even though all associated tasks and actions intended to attain the effect are GREEN), the AC must reach consensus as to whether the discrepancy is due to an expected time lag between task/action completion and a reflection of change in indicator status, or if the tasks/actions are not achieving the intended results.

(2) As consensus is reached, effect-to-task/action mismatches are noted for investigation during deficiency analysis and follow-on branch/sequel development and/or plan refinement.

c. Step One: MOE Indicator Analysis. When the AC detects a mismatch between task/action completion status and the anticipated attainment status for an effect, the AC begins the deficiency analysis process by reviewing the associated MOE indicator data. The review should ensure that the reported data is timely and of sufficient fidelity.
d. Step Two: Task Analysis. With the effects status verified, the AC verifies the
status of the tasks and underlying actions against key nodes associated with the effect
under consideration. The AC, working with the supported commander for the task,
verifies that tasks and actions have actually been completed and that sufficient time has
elapsed for changes to be reflected in the indicators. The verified status for effects and
tasks is passed from the AC to the assessment board (if established) and JPG, who must
determine whether the OPLAN/OPORD should continue uninterrupted or whether
additional/alternate actions against key nodes are required. Task analysis steps are summarized in Figure V-6.
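The mismatch trigger that starts deficiency analysis can be sketched as a simple filter over effect and task statuses. The data shape and status strings below are illustrative assumptions; the flagged effects still require AC judgment as to whether the discrepancy reflects an expected time lag.

```python
def find_mismatches(effects):
    """Return names of effects flagged for deficiency analysis.

    effects: iterable of (effect_name, effect_status, task_statuses)
    tuples, with statuses given as "RED"/"AMBER"/"GREEN". A mismatch
    is a desired effect assessed RED even though every associated
    task or action intended to attain it is GREEN.
    """
    return [name for name, status, tasks in effects
            if status == "RED" and tasks
            and all(t == "GREEN" for t in tasks)]
```

For example, an effect assessed RED with all tasks GREEN is flagged, while a RED effect with an AMBER task is not, because incomplete tasks already explain the shortfall.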
CHAPTER VI
OPERATIONAL IMPLICATIONS
U.S. military power today is unsurpassed on the land and sea and in the air,
space, and cyberspace. The individual Services have evolved capabilities and
competencies to maximize their effectiveness in their respective domains.
Even more important, the ability to integrate these diverse capabilities into a
joint whole that is greater than the sum of the Service parts is an unassailable
American strategic advantage.
Admiral M.G. Mullen
Chairman of the Joint Chiefs of Staff
Capstone Concept for Joint Operations, January 2009
1. General

Doctrine

Organization

Training
a. This is one of the most important capability development efforts for leaders and
staff elements at every level. The focus of leader development efforts regarding
assessment should remain consistent with the current trend of developing innovative and
adaptive leaders who can respond effectively to a wide variety of circumstances.
b. Pushing responsibility and authority to increasingly lower levels requires trust
and confidence between leaders and subordinates. Particularly important is how leaders provide commander's guidance and commander's intent, and visualize the operational
environment as they relate to assessment. Developing assessment plans and determining
MOPs and MOEs is both an art and science that the Services must address more directly
and earlier in the development of commissioned and non-commissioned leaders. The
Services are ultimately responsible for developing their senior and junior leaders, but the
following ideas could be helpful from a joint perspective. Specifically:
(1) Pursue greater participation by interagency personnel in professional
military education schools.
(2) Facilitate knowledge sharing and development of adaptability-related skills.
(3) Incorporate assessment requirements in decision-making exercises.
6. Personnel
APPENDIX A
REFRAMING
1. Reframing
tend to neglect reflection and reframing following successful actions. To guard against
complacency, the commander and design team practice design during planning and
execution. They must question their current understanding and reframe as the
environment changes and they gain new knowledge.
c. Once the commander decides to reframe some or all of the operational
environment, the problem set, or the operational approach, the commander issues updated
planning guidance to the staff. The staff then conducts planning. This planning may be
abbreviated or extensive. The outcome of planning may cause the commander to modify
or abandon the current operational approach by issuing a FRAGORD or new OPORD.
The commander may also decide to hold the results of planning in abeyance to allow
more time for the environment to react to the current plan and operational approach.
Additional assistance with reframing may be found in the Deputy Director, J-7, Joint and Coalition Warfighting Commander's Handbook for Operational Design.
2.
during reframing provides the freedom to operate beyond the limits of any single
perspective. In a complex system, conditions will change because forces and actors will
continuously act on the system. Recognizing and anticipating these changes is
fundamental to design and essential to an organization's ability to learn and adapt.
ANNEX A TO APPENDIX A
FAILURE TO REFRAME THE PROBLEM:
The Beirut Intervention to the Marine Barracks Bombing, 1983
The Israeli invasion of southern Lebanon commenced on 6 June 1982 and was
designed to remove the threat of the Palestinian Liberation Organization (PLO) operating
in the area, pushing them 40 kilometers north of the Israeli-Lebanese border. The U.S.,
French, and Italians responded by sending the Multi-National Force (MNF) of peacekeepers into Beirut in August and September, evacuating over 14,000 PLO
combatants out of the country to Tunisia, Yemen, Jordan, and Syria. On 8 September,
the newly-elected, Israeli-supported Christian Phalangist President of Lebanon was
assassinated by a member of the Syrian Social Nationalist Party. The next day the
Israelis moved into West Beirut. Despite assurances by the Americans to the PLO
leadership that refugees there would be safeguarded, Israeli-backed Lebanese Christian
Phalangists massacred over 800 civilians at the Sabra and Shatila refugee camps on 16
September. Not only was this a human tragedy, it was a profound embarrassment for the
United States.
The MNF divided Beirut into three zones: the French had the northern part of the city, the Italians the middle, and the U.S. the southern zone, which included the large Beirut International Airport (BIA), then undergoing construction improvements. 32 Marine
Amphibious Unit (MAU) received its orders to establish a presence in the U.S. MNF
Zone, intermingled with civilians operating and upgrading the BIA. The idea was for the
Marines to allow as much normalcy as possible in their peacekeeping role. The biggest
problem facing the Marines was what the Israelis were doing in their assigned zone.
While the operating guidance was to ignore them lest it appear that the U.S. was taking
their side, from a practical standpoint this was difficult to execute. In October 1982, 32
MAU was relieved in place by 24 MAU.
The mission directive for 24 MAU stated: "establish [an] environment which will permit the Lebanese Armed Forces [LAF] to carry out its responsibilities in the Beirut area, and [to] be prepared to protect U.S. forces and conduct retrograde and withdrawal operations from the area." The international MNF commanders agreed on patrolling into
Christian East Beirut to create the impression among the Muslims that the force was
indeed impartial and not allied to the Israeli and Christian Phalangist side. The MAU
commander decided to do more to permit the LAF to carry out its responsibilities by
putting his idle Marines to work training them.1
24 MAU was relieved in place by 22 MAU. On 18 April, 1983, an explosive-laden
pickup truck detonated within the American Embassy compound in Beirut, killing over
60 people. The Iranian-backed Hezbollah claimed credit for the attack. The MAU
commander was shot at in his helicopter on 5 May. The next day, Druze artillery shot at
the USS FAIRFAX COUNTY at sea performing logistical support, and two rounds hit
the Marine beach. Nobody was hurt.2
A-A-1
Annex A to Appendix A
On 17 May, President Ronald Reagan announced on television that, "The MNF went...to help the new government of Lebanon maintain order until it can organize its military and its police to assume control of its borders and its own internal security." To many, this meant the MNF, and particularly the American component, was no longer a force of impartial peacekeepers. It was on the side of the Christian Phalangist government. In this atmosphere, 24 MAU would relieve 22 MAU later that month. While there were disturbing signs of growing anti-Americanism in Muslim neighborhoods, it wasn't until the Israelis pulled out of much of Beirut, particularly the Shouf Massif hills, on 28 August that the situation began to dramatically change. Local Muslim militia warlords, long suppressed by the Israeli Defense Force, had free rein to take on the hated LAF.3
It was then that the Marines and LAF within the BIA and outlying checkpoints began to take sporadic fire over the next two days: small arms at first, but then mortar fire. The 24 MAU commander, Colonel Tim Geraghty, authorized illumination fired by Marine artillery over suspected Muslim Druze militia firing positions on the Shouf, but when indirect fire continued against BIA, he ordered high explosive rounds against them. On 4 September, Druze rockets, artillery, and mortar shells began raining into BIA, and a company of Marines collocated with an LAF armored force was taken under heavy fire. No supporting arms were made available, and the parent Marine battalion was restricted from providing any by the ROE and the need not to take sides.4
The MAU commander was put in a dilemma. His guidance to maintain a neutral posture, treating all parties equally, didn't square with the Presidential television statement to support the Lebanese government and help the LAF. The restriction that force be employed for self-defense only meant that the factions manipulated the U.S. MNF contingent to make it appear weak, indecisive, and irrelevant.
On 9 September, a Lebanese general requested U.S. support through State
Department channels for an LAF unit fighting in the Shouf town of Suq-al-Gharb.
Colonel Geraghty initially refused to assist as he thought it would compromise whatever
shred of neutrality the MNF was trying to maintain. The mission also did not conform to
guidance given to him through Defense Department channels. He was also acutely aware
of the 600 medium and heavy artillery tubes the Druze had amassed on the Shouf that
could hit his Marines at BIA. In his 10 September situation report to U.S. SIXTH
FLEET, Geraghty surmised:
The worsening military and political situation in Lebanon this week has pulled the MAU deeper and deeper into more frequent and direct military action. Our increasing number of casualties has removed any semblance of neutrality and has put us into direct retaliation against those who have fired on us...I am concerned...that the end does not appear to be in sight, and I perceive that the involvement in the Lebanese internal struggle has exceeded our original mandate.5
The MAU commander resisted a great deal of pressure from President Reagan's Special Ambassador and other senior leaders. Despite this, on 19 September the militias
1. Eric Hammel, The Root: The Marines in Beirut, August 1983-February 1984 (New York: Harcourt Brace Jovanovich, Publishers, 1985), 52-59.
2. Ibid., 76-81.
3. Ibid., 82-117.
4. Ibid., 146-195.
5. Colonel Timothy J. Geraghty, USMC (Ret.), Peacekeepers at War: Beirut 1983--The Marine Commander Tells His Story (Dulles, VA: Potomac Books, 2009), 68.
6. Hammel, 211-224.
7. Geraghty, 77.
8. Ibid., 88.
9. Hammel, 303.
10. Michael Petit, Peacekeepers at War: A Marine's Account of the Beirut Catastrophe (Boston: Faber and Faber, Publishers, 1986), 201.
11. Hammel, 421-426.
ANNEX B TO APPENDIX A
REFRAMING:
The South East Asia Lake, Ocean, River, Delta Strategy, 1968-69
When the U.S. committed to the Vietnam War in force, the U.S. Navy was soon
employed in a coastal interdiction campaign (Task Force 115, OPERATION MARKET
TIME) to prevent the movement of communist supplies into and around the Republic of
Vietnam (RVN, or South Vietnam). Before long, the Navy cast its eyes into the Mekong
Delta where the Viet Cong had established a stronghold of support. According to some,
75% of the population there was under the influence of the communists. A more tangible
indicator of Viet Cong success, the diversion of the rice harvest, showed that in 1965
through 1966, the output of rice from the Delta had fallen by about 25%. An estimated 30,000 regular troops and 50,000 guerrillas were operating in the Delta, and the three-division RVN army force, along with the Regional Forces and Popular Forces, was unable to stop them.1
On 18 December, 1965, the Navy created Task Force 116 to conduct OPERATION
GAME WARDEN to patrol the inland Delta waterways and deny the communists
waterborne supply routes. River Patrol Boats (PBRs) had to be procured and crews
trained; it wasn't until 8 May, 1966, that the first patrols were mounted.2 At first the enemy eluded the few boats that were used. But as the Task Force grew and patrols became more frequent, the Viet Cong employed close-range ambushes instead. PBR crews learned quickly that the best way to beat these tactics was to pre-empt them by mounting their own ambushes first. A Huey helicopter squadron was formed in April 1967, but by 1968 it had too few aircraft to respond to all requests for assistance across the breadth of the Delta.
Sea, Air, Land (SEAL) naval Special Forces were assigned to reinforce GAME
WARDEN in improving intelligence on communist activities beginning in February
1966. But this was not going to be enough, so the U.S.--despite the reluctance of the Saigon government--created the Mobile Riverine Force (MRF--Task Force 117), bringing in a brigade of the U.S. 9th Infantry Division in 1967. This unit would operate
from the water in armored landing craft and have specialized fire support afloat, to
include assigned artillery battalions.3 And while RVN forces were involved in MRF
operations, the U.S. was clearly in the lead conducting riverine search and destroy
operations.
When it came, the 1968 Tet offensive was defeated in the Delta as it was all over
South Vietnam. Despite favorable metrics on communist infiltration and supply
detections and successful boardings, the Viet Cong were still in control of major portions
of the Delta and still could mount waterborne transfers of troops and equipment. To
Navy CAPT Robert S. Salzer, commander of the Riverine Assault Force, the methods employed so far just weren't working and could never work. A new understanding of the problem and a new approach were necessary. He summed it up this way:
In an oriental country against an irregular force, what our tactic had to be was to
force the enemy to come to us, because he had the knowledge of the terrain, the
knowledge of the people. He had many advantages and we were relatively
clumsy at ferreting him out. How then, I kept postulating, do you make the
enemy come to you? The answer is you must, when he is an enemy depending on
an external source of supplies, choke off the supplies. And you must have many
small units engage in that activity; also, you must keep rapid reaction forces
poised and ready for [when] the enemy main force comes in and tries to tangle with the
little guys.
Where were his supplies coming from? At one time it was claimed they were
coming from the sea, but that turned out wrong. Others were claiming, and
intelligence people said they had hard evidence, that they were coming into
Cambodia and down the trail and then down through the Ca Mau Peninsula
around certain canal networks, and our intelligence people said they had them
pretty well identified. What I wanted to do was set up multiple, integrated
interdiction barriers with small river units, with troops associated with them,
setting up ambush patrols along these areas. The Viet Cong were pretty well
canalized for a variety of reasons as to their routes and I figured we might have a
20 percent probability at any one barrier. Therefore, we had to set a pack of
barriers. We needed multiple layers of interdiction patrol, such as in blue water
ASW [Anti-Submarine Warfare].4
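Salzer's barrier arithmetic can be made concrete. If each barrier is assumed to intercept a given movement with his rough 20 percent probability, and the barriers are treated as independent (our simplifying assumption, not his claim), the cumulative odds grow quickly with each added layer. A minimal sketch:

```python
# Illustrative only: assumes each barrier intercepts a given movement
# independently, using Salzer's rough 20 percent per-barrier estimate.
def interception_probability(p_single: float, n_barriers: int) -> float:
    """Probability that at least one of n independent barriers intercepts."""
    return 1 - (1 - p_single) ** n_barriers

# One barrier gives 20 percent; each added layer compounds the odds.
for n in range(1, 6):
    print(f"{n} barrier(s): {interception_probability(0.2, n):.0%}")
```

Three layers already bring interception near even odds, which is the intuition behind setting "a pack of barriers" rather than relying on any single one.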
The new three-star naval commander in Vietnam, VADM Elmo Zumwalt, was, at 47, the youngest commander of his rank in the Navy. He was eager for innovative ideas, and he listened to Salzer. Within weeks of his arrival in-country, in the autumn of 1968, Zumwalt pitched Salzer's ideas up the chain of command. The new strategy was far different from what had been done before. SEALORDS had three goals: (1) choke off
communist infiltration and supply routes into the Delta; (2) exercise continuous control
over the cross-Delta waterways and canals; and (3) get into the communist stronghold of
the Ca Mau peninsula. The means would be a joint and coalition effort. The ways would
be using water and air mobility to establish control over the land area of the Mekong
Delta. Unlike anywhere else in Vietnam, a front line would be established; behind it, prosperity was growing.5 As Salzer had predicted, the communists, faced with losing supplies and infiltration routes, indeed "came to us." More communist personnel, documents, and
equipment were captured. The number of enemy-initiated firefights significantly
increased. Casualties among the SEALORDS forces also rose dramatically, but so did
the enemys; estimates averaged about 30 communists lost for every friendly,
occasionally reaching over a 100:1 ratio.6 Salzer explained the reason for the success of
SEALORDS this way:
...the VC were set on avoiding contact; and that was a fairly easy task against
search and destroy tactics with multi-battalion units complete with artillery
support plowing through the paddy. It appeared to us that the best chance of
bringing the enemy into the open was to imperil his primary objective of resupply
and reinforcement by multiple interdiction barriers athwart his lines of
A-B-2
Reframing:
The South East Asia Lake, Ocean, River, Delta Strategy, 1968-69
communications to the Delta. No single interdiction barrier had much chance of
imposing significant attrition in view of the availability of alternate rivers and
streams. But a series of such barriers maintained by combined river, ground, and
air forces might have brought the VC to the point where they had to use sizable
units to break through. Then with ready-reaction forces (air-mobile battalions) the
enemy could be engaged on our termsa bait and destroy tactic.7
Salzer was promoted to flag rank and eventually served as the senior naval
commander in Vietnam in 1971, retiring as a VADM from his last tour as Commander,
Amphibious Forces Pacific. Zumwalt would be selected from his Vietnam tour to be the
Chief of Naval Operations, even though he had never held a numbered fleet command,
until then considered to be a prerequisite to hold the office.
1. John Forbes and Robert Williams, Riverine Force: The Illustrated History of The Vietnam War, Volume 8 (New York: Bantam Books), 51.
2. Ibid., 66.
3. Ibid., 83.
4. CDR R. L. Schreadly, USN (Ret.), From the Rivers to the Sea (Annapolis, MD: Naval Institute Press, 1992), 149.
5. Forbes and Williams, 153-154.
6. Thomas J. Cutler, Brown Water, Black Berets (Annapolis, MD: Naval Institute Press, 1988), 334.
7. Ibid., 336.
APPENDIX B
JOINT TASK FORCE ASSESSMENT CELL
COMPOSITION AND RESPONSIBILITIES
1. General
The procedures provided in this appendix provide a starting point designed to refresh
previous instruction or experience, or to serve as a planning template when compelled by
mission requirements. Although this appendix refers to the assessment cell, the
discussion can be applied to assessment teams, assessment working groups, or other
similar organizations as appropriate.
2. Purpose

a. Inputs:
(1) During JTF planning - approved commander's objectives from the joint planning group (JPG).
(2) During JTF execution - assessment analysis and conclusions.
b. Purpose:
d.
rhythm.
(2) During execution, the assessment team will synchronize with the assessment cell and JPG battle rhythms.
e.
f. Membership:
(1) Designated JTF planner or assessor (assessment team lead)
(2) JTF J-2/intelligence planner
(3) JTF collection manager
(4) JTF J-5/J-3 planner
(5) JTF civil affairs planner
(6) JTF information operations planner
(7) JTF political/military affairs planner
(8) JTF medical affairs planner
(9) JTF cultural advisor
(10) JTF logistics planner
(11) JTF SJA representative
(12) CCDR J-code representatives
(13) Component planners
(14) Interagency partner representatives
(15) Multinational partner representatives
(16) JIPOE analysts (if appropriate)
g. AC Responsibilities:
(1) Manage assessment team battle rhythm.
(2) Develop desired effects in support of the JFC's objectives.
(3) Identify potential undesired effects.
(4) Develop MOEs for desired effects.
(5) Develop MOEs for undesired effects.
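Responsibilities (2) through (5) amount to maintaining a traceable hierarchy: each MOE belongs to a desired or undesired effect, and each effect supports a commander's objective. A minimal data-structure sketch of that traceability (class names, fields, and the sample statements are illustrative assumptions, not doctrinal terms):

```python
# Illustrative sketch of an assessment hierarchy: objectives contain
# desired/undesired effects, and each effect carries its own MOEs.
# All names, fields, and sample statements are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Effect:
    statement: str
    desired: bool                       # False marks a potential undesired effect
    moes: list[str] = field(default_factory=list)

@dataclass
class Objective:
    statement: str
    effects: list[Effect] = field(default_factory=list)

obj = Objective("Port city secure for humanitarian relief")
obj.effects.append(Effect("Local police control key routes", desired=True,
                          moes=["Attacks per week on relief convoys"]))
obj.effects.append(Effect("Displaced persons overwhelm port services",
                          desired=False,
                          moes=["Daily arrivals at displacement camps"]))

# Every MOE traces back through exactly one effect to one objective.
assert all(e.moes for e in obj.effects)
```

The point of the structure is the audit trail: an indicator that cannot be traced to an effect, and through it to an objective, has no place in the assessment plan.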
Input:
f. Membership:
(1) Designated JTF assessment lead
(2) JTF J-2 representative
(3) Joint intelligence support element (JISE) representative (if appropriate)
(4) JTF collection manager
(5) JTF J-3/J-35 representative
(6) JTF civil affairs officer
(7) JTF information operations officer
(8) JTF political/military affairs officer
(9) JTF medical affairs officer
(10) JTF SJA representative
(11) JTF cultural advisor
(12) JTF logistics representative
(13) Interagency partner representatives
(14) Multinational partner representatives
(15) CCDR J-code representatives
(16) Component representatives
(17) JIPOE analysts (if appropriate)
g.
APPENDIX C
ASSESSMENT DEVELOPMENT DURING THE
JOINT OPERATION PLANNING PROCESS
APPENDIX D
INTERAGENCY CONFLICT ASSESSMENT OVERVIEW
Editor's Note: The primary source of information in this appendix is JP 3-08,
Interorganizational Coordination During Joint Operations. Minor changes were made to
conform to joint doctrine and formatting requirements.
1. Overview
a. Addressing the causes and consequences of weak and failed states has become
an urgent priority for the USG. Conflict both contributes to and results from state
fragility. To effectively prevent or resolve violent conflict, the USG needs tools and
approaches that enable coordination of US diplomatic, development, and military efforts
in support of local institutions and organizations/individuals seeking to resolve their
disputes peacefully.
b. A first step toward a more effective and coordinated response to help states
prevent, mitigate, and recover from violent conflict is the development of shared
understanding among USG agencies about the sources of violent conflict or civil strife.
Achieving this shared understanding of the dynamics of a particular crisis requires both a
joint interagency process for conducting the assessment and a common conceptual
framework to guide the collection and analysis of information. The ICAF is a tool that
enables an interagency team to assess conflict situations systematically and collaboratively. It supports USG interagency planning for conflict prevention, mitigation, and stabilization.
2. Purpose
a. Using the ICAF can facilitate a shared understanding across relevant USG
departments and agencies of the dynamics driving and mitigating violent conflict within a
country that informs US policy and planning decisions. (Note: "agencies" will be used in this appendix in place of "departments and agencies.") It may also include steps to establish
a strategic baseline against which USG engagement can be evaluated. It is available for
use by any USG agency to supplement interagency planning.
b. The ICAF draws on existing methodologies for assessing conflict currently in
use by various USG agencies as well as IGOs and NGOs. It is not intended to duplicate
existing independent analytical processes, such as those conducted within the IC. Rather,
it builds upon those and other analytical efforts to provide a common framework through
which USG agencies can leverage and share the knowledge from their own assessments
to establish a common interagency perspective.
c. The ICAF is distinct from early warning and other forecasting tools that identify
countries at risk of instability or collapse and describe conditions that lead to outbreaks of
instability or violent conflict. The ICAF builds upon their results by assisting an
interagency team to understand why such conditions may exist and how to best engage to
transform them. The ICAF draws on social science expertise to lay out a process by
which an interagency team will identify societal and situational dynamics known to
increase or decrease the likelihood of violent conflict. In addition, the ICAF provides a
shared, strategic snapshot of the conflict against which future progress can be measured.
3.
a. An ICAF should be part of the first step in any interagency planning process. It
can help to inform the establishment of USG goals, design or reshape activities,
implement or revise programs, or reallocate resources. The interagency planning process
within which an ICAF is performed determines who initiates and participates in an ICAF,
the time and place for conducting an ICAF, the type of product needed and how the
product will be used, and the level of classification required.
b. Whenever the ICAF is used, all of its analytical steps should be completed.
However, the nature and scope of the information collected and assessed may be
constrained by time, security classification, or access to the field.
c.
The ICAF is a flexible, scalable interagency tool suitable for use in:
(1) Engagement and conflict prevention planning.
(2) USG R&S contingency planning.
(3) USG R&S crisis response planning.
a. The process within which an ICAF is used determines which agencies and
individuals should serve on the team and in what capacities they should serve. For
example, an established country team may use the ICAF to inform country assistance
strategy development, or USAID and S/CRS may co-lead an interagency team to assist in developing an NDAA Section 1207 request. In whole-of-government crisis response
under the IMS for R&S, an ICAF normally will be part of the strategic planning process
led by the CRSG Secretariat. The ICAF might also be used with a key bilateral partner as
part of collaborative planning. The agency/individual responsible for managing the
overall planning process is responsible for proposing the ICAF and requesting necessary
agency participation.
b. Participants in an ICAF assessment should include the broadest possible
representation of USG agencies with expertise and/or interest in a given situation. An
ideal interagency field team would represent diverse skill sets and bring together the
collective knowledge of USG agencies. Participants would at a minimum include
relevant: regional bureaus, sectoral experts, intelligence analysts, and social science or
conflict specialists. When used as part of the planning processes outlined in Principles of
the USG Planning Framework, the team will normally include members of the strategic
planning team. This team could be expanded as needed to include local stakeholders and
international partner representatives.
c. Members of the interagency team are responsible for providing all relevant
information held by their respective agencies to the team for inclusion in the analysis,
including past assessments and related analyses. These representatives should also be
able to reach back to their agencies to seek further information to fill critical information
gaps identified by the ICAF.
5. The Elements of the Interagency Conflict Assessment Framework

a. The ICAF can be used by the full range of USG agencies at any planning level. Conducting an ICAF might be an iterative process with initial results built upon as the USG engagement expands. For example, an ICAF done in Washington at the start of a crisis might be enhanced later by a more in-depth examination in-country. The level of detail into which the ICAF goes will depend upon the conflict and type of USG engagement.

b. The two major components of the ICAF are the conflict diagnosis and the segue into planning.

6. Conflict Diagnosis

a. Using the conceptual framework for diagnosing a conflict (see Figure D-1), the interagency team will deliver a product that describes the context; core grievances and social/institutional resilience; drivers/mitigators of conflict; and opportunities for increasing or decreasing conflict.

Figure D-1. Conceptual Framework for Diagnosing a Conflict
(1) Context. The team should evaluate and outline key contextual issues of the
conflict environment. Context does not cause conflict but describes often long-standing
conditions resistant to change. Context may create preconditions for conflict by
reinforcing fault lines between communities or contribute to pressures making violence
appear as a more attractive means for advancing one's interests. Context can shape perceptions of identity groups and be used to manipulate and mobilize constituencies. Context may include environmental conditions, poverty, a recent history of conflict, a youth bulge, or a conflict-ridden region.
(2) Core Grievances and Sources of Social/Institutional Resilience. The
team should understand, agree upon, and communicate the concepts of core grievance
and sources of social/institutional resilience and describe them within the specific
situation being assessed.
(a) Core Grievance. The perception, by various groups in a society, that
their needs for physical security, livelihood, interests, or values are threatened by one or
more other groups and/or social institutions.
(b) Sources of Social/Institutional Resilience. The perception, by various
groups in a society, that social relationships, structures, or processes are in place and able
to provide dispute resolution and meet basic needs through nonviolent means.
(3) Drivers of Conflict and Mitigating Factors. The team should understand
and outline drivers of conflict and mitigating factors, and enumerate those identified
within the specific situation being assessed.
(a) Drivers of conflict refers to the dynamic situation resulting from the
mobilization of social groups around core grievances. Core grievances can be understood
as the potential energy of conflict. Key individuals translate that potential energy into
active drivers of conflict.
(b) Mitigating factors describe the dynamic situation resulting from the
mobilization of social groups around sources of social/institutional resilience. Mitigating
factors can be understood as the actions produced when key individuals mobilize the
potential energy of social and institutional resilience.
(4) Windows of Vulnerability and Windows of Opportunity. The team
should specify opportunities for increasing and decreasing conflict as defined here and
describe those expected in the near-term, and where possible, in the longer-term.
(a) Windows of vulnerability are moments when events threaten to rapidly
and fundamentally change the balance of political or economic power. Elections,
devolution of power, and legislative changes are examples of possible windows of
vulnerability. Key individuals/organizations may seize on these moments to magnify the
drivers of conflict.
(b) Determines key individuals:
1.
2.
including:
a.
e.
Mass support.
(1) Specify current USG activities (listing USG agencies present in the country
and the nature and scope of their efforts).
(a) Identify the impact of these efforts on drivers of conflict and mitigating
factors.
(b) Identify efforts that target similar outcomes and coordination
mechanisms in place.
(2) Specify current efforts of non-USG participants, including bilateral
agencies, multi-lateral agencies, NGOs, the private sector, and local entities.
(a) Identify the impact of the efforts on the drivers of conflict and
mitigating factors.
(b) Identify efforts that target similar outcomes (including USG efforts)
and coordinating mechanisms in place.
(3) Identify drivers of conflict and mitigating factors not sufficiently addressed
by existing efforts (i.e., gaps).
(4) Specify challenges to addressing the gaps.
(5) Referring to windows of vulnerability, describe risks associated with failure
to address the gaps.
(6) Referring to windows of opportunity, describe opportunities to address the
gaps.
d. The ICAF draws on the information generated in the segue into planning to determine potential entry points for USG efforts. The description of these entry points
should explain how the dynamics outlined in the ICAF diagnosis may be susceptible to
outside influence.
APPENDIX E
TACTICAL CONFLICT ASSESSMENT AND PLANNING FRAMEWORK
NOTE: The following information is excerpted from FM 3-07, Stability Operations.
Except for formatting, the information presented is identical to FM 3-07.
1. General
Conceptual Framework
a.
(1) Instability results when the factors fostering instability overwhelm the
ability of the host nation to mitigate these factors.
(2) Assessment is necessary for targeted and strategic engagement.
(3) The population is the best source for identifying the causes of instability.
(4) Measures of effectiveness are the only true measure of success.
b. Instability. Instability results when the factors fostering instability overwhelm
the ability of the host nation to mitigate these factors. To understand why there is
instability or determine the risk of instability, the following factors must be understood:
grievances, key actors' motivations and means, and windows of vulnerability.
(1) Grievances are factors that can foster instability. They are based on a
group's perception that other groups or institutions are threatening its interests.
Examples include ethnic or religious tensions, political repression, population pressures,
and competition over natural resources. Greed can also foster instability. Some groups
and individuals gain power and wealth from instability. Drug lords and insurgents fall in
this category.
(2) Key actors' motivations and means are ways key actors transform
grievances into widespread instability. Although there can be many grievances, they do
not foster instability unless key actors with both the motivation and the means to translate
these grievances into widespread instability emerge. Transforming grievances into
widespread violence requires a dedicated leadership, organizational capacity, money, and
weapons. If a group lacks these resources, it will not be able to foster widespread
instability. Means and motivations are the critical variables that determine whether
grievances become causes of instability.
(3) Windows of vulnerability are situations that can trigger widespread
instability. Even when grievances and means are present, widespread instability is
unlikely unless a window of vulnerability exists that links grievances to means and
motivations. Potential windows of vulnerability include an invasion, highly contested
elections, natural disasters, the death of a key leader, and economic shocks.
(4) Even if grievances, means, and vulnerabilities exist, instability is not
inevitable. For each of these factors, there are parallel mitigating forces: resiliencies, key actors' motivations and means, and windows of opportunity.
(5) Resiliencies are the processes, relationships, and institutions that can reduce
the effects of grievances. Examples include community organizations, and accessible,
legitimate judicial structures. Key actors' motivations and means are ways key actors
leverage resiliencies to counter instability. Just as certain key actors have the motivation
and means to create instability, other actors have the motivation and the means to rally
people around nonviolent procedures to address grievances. An example could be a local
imam advocating peaceful coexistence among opposing tribes. Windows of opportunity
are situations or events that can strengthen resiliencies. For example, the tsunami that
devastated the unstable Indonesian province of Aceh provided an opportunity for rebels
and government forces to work together peacefully. This led to a peace agreement and
increased stability.
(6) While understanding these factors is crucial to understanding stability, they
do not exist in a vacuum. Therefore, their presence or absence must be understood within
the context of a given environment. Context refers to longstanding conditions that do not
change easily or quickly. Examples include geography, demography, natural resources,
history, as well as regional and international factors. Contextual factors do not
necessarily cause instability, but they can contribute to the grievances or provide the
means that foster instability. For example, although poverty alone does not foster
conflict, poverty linked to illegitimate government institutions, a growing gap between
rich and poor, and access to a global arms market can combine to foster instability.
Instability occurs when the causes of instability overwhelm societal or governmental
ability to mitigate it.
c. Assessment
While outputs indicate task performance, effects measure the effectiveness of
activities against a predetermined objective. Measures of effectiveness are crucial for
determining the success or failure of stability tasks. (See Chapter III, Assessment
Components, for a detailed discussion of the relationship among assessment,
measures of performance, and measures of effectiveness.)
3.
d. Design
e. Evaluation
(1) Activities and projects are products that foster a process to change behavior
or perceptions. Indicators and measures of effectiveness identify whether change has
occurred or is occurring.
(2) Perceptions of the local populace provide the best means to gauge the
impact of program activities.
(3) Indicators provide insight into measures of effectiveness by revealing
whether positive progress is being achieved by program activities.
(4) Good deeds cannot substitute for effectively targeted program activities;
the best information engagement effort is successful programming that meets the needs of
the local populace.
(5) Intervention activities should:
(a) Respond to priority issues of the local populace.
(b) Focus effort on critical crosscutting activities.
(c) Establish anticorruption measures early in the stability operation.
(d) Identify and support key actors early to set the conditions for
subsequent collaboration.
(6) Intervention activities should not:
(a) Mistake good deeds for effective action.
(b) Initiate projects not designed as program activities.
(c) Attempt to impose Western standards.
(d) Focus on quantity over quality.
4.
Summary
The TCAPF has been successfully implemented in practice to identify, prioritize, and
target the causes of instability in a measurable and immediately accessible manner. Since
it maximizes the use of assets in the field and gauges the effectiveness of activities in
time and space, it is an important tool for conducting successful stability operations.
APPENDIX F
SELECTED EXCERPTS FROM THE NATO
OPERATIONS ASSESSMENT HANDBOOK
NOTE: The following are selected excerpts from the proposed North Atlantic Treaty Organization (NATO)
Operations Assessment Handbook. Similar to this handbook, the NATO handbook carries the following
disclaimer: "This handbook covers the fundamentals of Operations Assessment in NATO, and must be
viewed as a pre-doctrinal document for informing commanders and staff officers on the current
understanding of Operations Assessment principles, procedures and techniques." Readers are encouraged
to read the entire NATO handbook in order to understand the assessment construct and terms they are
likely to encounter in NATO-led operations. Except for paragraph and figure numbering, the text and
formatting from the original document are retained as much as possible.
Introduction
Operations Assessment1
In this handbook, the term Operations Assessment is to be understood as the
function that enables the measurement of progress and results of operations
in a military context, and the subsequent development of conclusions and
recommendations that support decision making.
Background
Operations, whether military-led (conducted by coalitions or alliances such as NATO) or
civilian-led (such as disaster relief conducted by charitable or international organisations
or other entities), take place in dynamic environments where changes in the political,
economic, social, military, infrastructure and information domains are constantly
occurring. All organisations involved need a feedback process in order to
determine the effectiveness of their operations and make recommendations for changes;
NATO is no exception.
In NATO, this feedback process is called Operations Assessment1 and is critical to
informing on the progress being made in creating desired effects and achieving
objectives, which in turn allows adjustments to be made to the plan and informs the
decision making of military and political leadership. Operations Assessment provides an
important input in the knowledge development process, which builds up and maintains a
holistic understanding of the situation and operating environment.
Operations Assessment can only provide indications of trends in a system's behaviour
given certain actions. Thus, success in operations still relies heavily on a commander's
intuition, experience and judgement.
1 Important Note: In late 2010, the decision was made to change the formal name of this function from
Assessment to Operations Assessment in order to avoid confusion with other existing uses of assessment
in NATO. This handbook uses both terms interchangeably; within the context of this document no
confusion should arise.
Appendix F
Military Operations
Military Operations are conducted using four major interdependent functions:
Knowledge Development, Planning, Execution, and Operations Assessment.
Knowledge Development (KD)2
KD is a continuous, adaptive and networked activity carried out at all levels of command
to provide commanders and staffs with a shared, comprehensive understanding of
complex situations, including the interrelationship of different political, military,
economic, social, infrastructure, and information (PMESII) domains. It enables the
commander and staff to better understand the possible effects of military, political,
economic and civil actions on different systems and actors within the engagement space.
It supports decision making throughout the different phases of an operation. The KD
process provides a shared knowledge base of operationally-relevant material.
KD addresses the critical requirement to develop a greater understanding of complex
problems by exploiting information and knowledge from a wide range of sources. This
process helps to determine the most appropriate responses and enables the effective use
of military and non-military means. In order to develop improved understanding of such
complex problems, KD includes a systems approach to analysis to complement other
methods of analysis. A systems approach to analysis contributes to a more holistic view
of the engagement space as well as the operational environment and supplements other,
more traditional analysis techniques. It focuses on collecting and analysing information
about the various systems and actors related to the problem, as well as the
interrelationship of their different sub-systems and system elements in order to develop
the knowledge required to support decisions regarding the most appropriate response.
KD is critical during planning of operations, but has a strong link to operational
execution and Operations Assessment. Initial development of the Operations Assessment
process will be dependent upon the systems analysis and contents of the knowledge base
produced by the KD process, in addition to other sources. The products produced from
the Operations Assessment process will add to the understanding of the operational
environment and this information will be fed back into the knowledge base. KD and
Operations Assessment processes will be interdependent by virtue of their common
linkages to the knowledge base.
Planning
Planning is a logical sequence of cognitive processes and associated procedures
undertaken by commanders and staffs to analyse a situation, deduce mission requirements
and determine the best method for accomplishing tasks in order to achieve desired
military objectives and ultimately, in the case of NATO, the end-state. Based on the
knowledge of centres of gravity and key system vulnerabilities gained through analysis of
2 Description adapted from the Bi-SC KD Concept (12 Aug 2008) and the Bi-SC KD Pre-Doctrinal Handbook (v0.79, 25 Feb 2010).
being used effectively, progress is being made and when the end-state is likely to be or
is actually achieved.
Operations Assessment is closely linked with the KD function, which is responsible for
determining the initial system state during planning and maintaining the ongoing
knowledge of the engagement space during execution. Operations Assessment uses the
information and knowledge from KD to design the metrics of measurement, and KD uses
the results from Operations Assessment to update systems analysis and input in the
knowledge base.
Operations Assessment can be applied to specific operations, events or topics either
within or outside the military plan. A broader application of Operations Assessment
considers an overall military campaign. Operations Assessment may consider a range of
timescales from short-term changes to long-term change over years. There are many
ways in which the responsibility for the level and timescale of Operations Assessment
can be divided, depending on the particular context, the level of command and the needs
of the Commander.
At any level and any timescale, there are in general two types of Operations Assessment
that will typically be undertaken during an operation: historic and predictive.3
Historic Assessment during an operation provides the Commander with an evaluation
of the completion of actions, and progress toward the desired Effects and achievement of
Decisive Points, Objective(s), and ultimately the End-State. This Assessment utilises
historical data to identify trends up to and including the current state. Predictive
assessment builds on the historic assessment and helps extrapolate current trends to the
future, thus identifying potential opportunities and risks for the Commander. In addition
to past events, predictive assessment is based on known future events, plans, intentions,
actions and assumptions to develop a forecast of the future situation.
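The contrast between the two types can be shown with a minimal sketch. This is the editor's illustration only; the data, rates, and function names are invented, not NATO doctrine. It answers a question of the kind posed in the footnote: given the current training rate, what will force strength be in one year?

```python
# Illustrative sketch: historic vs. predictive assessment of a single metric.
# All figures are invented for illustration; they are not doctrinal values.

def historic_trend(observations):
    """Historic assessment: average month-on-month change in the metric to date."""
    deltas = [b - a for a, b in zip(observations, observations[1:])]
    return sum(deltas) / len(deltas)

def predictive_estimate(observations, months_ahead):
    """Predictive assessment: extrapolate the historic trend forward,
    assuming current intentions and conditions hold."""
    return observations[-1] + historic_trend(observations) * months_ahead

# Monthly count of trained host-nation personnel (hypothetical data).
trained = [1000, 1150, 1320, 1430, 1600]

rate = historic_trend(trained)
in_one_year = predictive_estimate(trained, 12)
print(f"Current rate: {rate:.0f}/month; projected strength in 12 months: {in_one_year:.0f}")
```

The sketch makes the dependence explicit: the predictive estimate is only as good as the assumption that the historic trend continues, which is why the handbook stresses that assessment informs, rather than replaces, the commander's judgement.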
In some circumstances, Operations Assessment may track the activities of other actors,
such as IOs/NGOs [international organizations/nongovernmental organizations], and
data produced regularly by these organisations. Whether intentional or not, the activities
of non-military organisations may create effects in the military domain and vice-versa.
Where necessary, Operations Assessment must attempt to identify the status of these
effects.
The Operations Assessment Process
The Operations Assessment process involves four major steps:
a. Designing the assessment and support to planning;
b. Developing the data collection plan;
3 Predictive implies that Assessments make useful estimates of trends for the purposes of planning, based on previous history,
current intentions and a number of assumptions. A legitimate question that may illustrate the need for predictive assessment is: based
on our current rate of equipping and training nation X's forces, what will be their strength in 1 year?
Definitions
Operations Assessment at the operational level (more often called the Joint level in
NATO) can be divided into two areas: Campaign Assessment and Operational
Assessment.
Campaign Assessment
Campaign Assessment is the continuous monitoring and assessment of all effects and
objectives specified in the operational level military plan (campaign). Furthermore, the
assessment of desired and undesired effects across all the PMESII domains will be
considered, where they impact significantly on the operational level military plan, or
where they are explicitly stated in the military plan. It seeks to answer the question: Are
Paragraph adapted from: Williams & Morris (2009). The development of theory-driven evaluation in the military: Theory on the
front-line. American Journal of Evaluation, 30(1), 62-79.
It may be that the operational plan has to contain effects in the economic, political or social domains, in the local or regional context,
that are outside of the military mission. The strategic level will retain the theatre-wide / international assessment of PMESII domains.
b. Work with the Joint Operations Planning Group (JOPG) during development and
revision of the OPLAN
c. Consider the tactical level assessments received from their subordinate commands
and other areas of NATO
d. Produce the operational level assessments on ongoing military operations
considering the tactical level assessments
e. Contribute to Strategic Assessments as required
Tactical Level
At the Tactical level, the Commander owns the tactical level Assessment. The
Assessment Staff take responsibility for development of the assessment annexes in the
OPLAN if required, and the conduct of assessments during execution. At the tactical
level, assessment staff have the following specific responsibilities:
a. Act as the focal point for Assessment development in their respective HQ,
including the contribution to doctrine development
b. Work with the Operations Planning Group (OPG) during development and
revision of the OPLAN
c. Consider the tactical level assessments received from their subordinate commands
and other areas of NATO
d. Produce the tactical level assessments on ongoing military operations considering
the assessments of their subordinate commands
e. Contribute to Operational Level Assessments as required
Assessment Process at the Operational and Tactical Level
It is essential that Assessment personnel are involved in the early stages of the decision
cycle of Plan, Execute, Monitor, and Assess. The early intervention of Assessment
personnel in the decision cycle ensures that the plan is measurable from the very
beginning.
Members of the Joint Assessment Branch (JAB) are an integral part of the JOPG and
support the planning in the different syndicates. The syndicate developing the Operational Design
must contain JAB expertise. The Operational Design is the key reference document for
the plan and assessment process. The Operational Design consists of operational
objectives nested within the strategic objectives, related operational effects and decisive
points. The operational objectives, effects and decisive points form the basis for the
development of the Assessment Annex.
In order to achieve an overall coherent Assessment Plan, the assessment development
must be conducted as a top-down approach throughout all levels of command.
Consequently, the assessment products at strategic level, especially the Strategic Design
with its objectives and effects, and the strategic assessment annex must be taken into
consideration at the Operational Level.
Adapted from Assessing Progress in Military Operations: Recommendations for Improvement, produced by United States Joint
Forces Command for Multinational Experiment 6 (Version 0.5, 24 July 2009).
See, for example, Sobel, M. E. (2000). Causal Inference in the Social Sciences. Journal of the American Statistical Association,
95(450), 647-651; and Posavac, E., & Carey, R. (2007). Program Evaluation: Methods and Case Studies (7th ed.).
- Invalid choice of MOP: the real status of the action is not being captured.
- Invalid assumptions regarding Actions to create Effect: something in the
Engagement Space has created a change, but not the Actions.
- Invalid allocation of resources: less Action than planned was required to create
the Effect. This may also be a poor choice of MOP targets.
- Invalid choice of MOE: the real system state is not being captured.
- Own force Actions are creating unintended and/or undesired effects that are
counteracting the intended, desired effect.
- Invalid choice of MOE: the correct system element is not being measured. This
may also be a poor choice of MOE target.
- An unforeseen time lag exists: conditions for the Effect have been created, but the
Effect is not apparent yet.
The Data Collection Matrix is part of the Assessment Plan; certain assumptions are made
about the availability of data during the planning process. During execution of the
assessment process, problems may arise due to:
These are all reasons to revisit the Assessment Plan, and may lead to selection of
different data collection techniques or different MOP/MOE.
Release of Assessment Results
In all cases, Command approval is required to release any Assessment Results outside of
the procedures established in the OPLAN.
operations, operating principles, and cultures of those non-military actors whom NATO is
likely to encounter or work with in the field.
The purpose of this section is to briefly describe the civilian sectors commonly
encountered on operations. Note that this section is not intended to be exhaustive;
it represents the knowledge of the authors at the time of writing and is intended to
give only an indication.
There is much commonality between the ways that the military community and the
international development community approach analysis, planning, implementation
and assessment of progress. International development work focuses on long-term and
equitable economic growth, global health, agriculture and trade, democracy and
governance, conflict management, humanitarian assistance, and many other factors.
Creating and building sustainable, host-nation owned capabilities is a primary goal of
much international development work. Organisations such as the World Bank, the
United Nations Development Group, the European Commission and national
development agencies, such as the US Agency for International Development, are
informative places to look in order to learn more about the international development
community.
Further, many universities offer degree programs in international development.
Peace operations comprise peacekeeping (the provision of temporary post-conflict
security by internationally mandated forces) and peacebuilding (those efforts
undertaken by the international community to help a war-torn society create a
self-sustaining peace).8 This multi-faceted community of interest includes military
organisations, but tends to be oriented towards much more than just military principles
and objectives. Some guiding principles of peacekeeping operations include consent of
the parties to the conflict, impartiality in dealings with the parties to the conflict (not to be
confused with neutrality or inactivity), and non-use of force except in self-defence and
defence of the mandate.9 Other success factors include legitimacy, credibility and local
ownership.10
Peacebuilding has become an overarching term for an entire range of actions designed to
contribute to building a culture of peace and can include activities such as the promotion
of a culture of justice, truth and reconciliation, capacity building and promotion of good
governance, supporting reform of security and justice institutions and socioeconomic
development.11 For more information, see the documents footnoted below and their
originating organisations' websites (www.un.org/Depts/dpko/dpko and
www.oecd.org/dac). Also, organisations such as the US Institute of Peace
(www.usip.org) and the Pearson Peacekeeping Centre (www.peaceoperations.org)
publish extensively on the topic of Peace Operations.
More information can be found at the websites of the International Red Cross and Red
Crescent (www.icrc.org), the UN Office for the Coordination of Humanitarian Affairs
(www.ochaonline.un.org), international non-governmental organisations such as World
Vision, and Oxfam International, and national government agencies such as the US
Agency for International Development.
The Civilian Sector and Assessment
Within the humanitarian aid and development community, the function of Assessment is
generally known by the term monitoring and evaluation. Many of the major IOs,
NGOs, and government agencies have well established monitoring and evaluation
processes, and many have entire departments within their organisations to deal with this
task. Although there is a broad spectrum of terminologies and processes within the civilian
monitoring and evaluation community, there have been some attempts to standardise
approaches by the Development Assistance Committee (DAC) of the Organisation for
Economic Cooperation and Development (OECD).
The DAC is an international forum of 24 countries where donor governments and
multilateral organisations such as the World Bank and the United Nations come
together to help partner countries reduce poverty and achieve the Millennium
Development Goals. The DAC issues analysis and guidance in key areas of development
and forges ties with other policy communities to co-ordinate efforts. Its members also
work together through peer review to assess each other's aid policies and their
implementation so as to promote good practice. The DAC's objective is to be the
definitive source of statistics on official development assistance (ODA).
12 www.globalhumanitarianassistance.org
13 Principles and Good Practice of Humanitarian Donorship, endorsed at the International Meeting on
Good Humanitarian Donorship, 17 June 2003.
14 Anderson, Mary B., Do No Harm: How Aid Can Support Peace - or War, 1999.
In 1991 the Development Assistance Committee (DAC) of the OECD set out broad
principles for the evaluation process for DAC members.15 These principles were refined
into five criteria that have been widely used in the evaluation of development initiatives:
efficiency, effectiveness, impact, sustainability and relevance. Subsequently the criteria
were adapted for the evaluation of complex emergencies,16 becoming a set of seven criteria:
relevance/appropriateness, connectedness, coherence, coverage, efficiency, effectiveness,
and impact. The DAC criteria are intended to be a comprehensive and complementary set
of measures.
Using the DAC frameworks, many major IOs and NGOs developed monitoring and
evaluation frameworks. It is recommended that military assessment staff become familiar
with the OECD-DAC documents and the terminology employed. The OECD-DAC
published a terminology guide, available on the website (www.oecd.org).
Civilian Approaches to Assessment17
What NATO terms Assessment is akin to the civilian practice of Monitoring and
Evaluation (M&E). Civilian research, literature and practice in the field with regard to
M&E offer many lessons for military assessment practitioners. The following definitions
are generally accepted by the civilian community:
Monitoring: A continuous function that uses a systematic collection of data on
specified indicators to provide management and primary stakeholders of an
intervention with information regarding the use of allocated funds, the extent of
progress, the likely achievement of objectives and the obstacles that stand in
the way of improved performance. Monitoring tends to answer questions such as:
What happened?
What is happening?
When did it happen?
Where did it happen?
Evaluation, on the other hand, tends to be a function that is more oriented towards
answering questions such as:
15 OECD-DAC (2000) DAC Criteria for Evaluating Development Assistance. Paris: OECD.
16 OECD-DAC (1999) Guidance for Evaluating Humanitarian Assistance in Complex Emergencies. Paris: OECD.
17 Adapted from USJFCOM (2010). Assessment of Progress in Military Operations: Considerations, Methodologies,
and Capabilities, version 2.0. Produced as part of the MNE 6 concept development and experimentation campaign.
Some similarities and differences between M&E are highlighted in Figure F-1 below:18
Input: The financial, human, and material resources used for the development
intervention.
Activity: Actions taken or work performed through which inputs, such as funds,
technical assistance and other types of resources are mobilised to produce specific
outputs.
Output: The products, capital goods and services which result from a development
intervention; may also include changes resulting from the intervention which are
relevant to the achievement of outcomes.
Outcome: The likely or achieved short-term and medium-term effects of an
intervention's outputs.
Impact: Positive and negative, primary and secondary long-term effects produced by
a development intervention, directly or indirectly, intended or unintended.19
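The OECD-DAC results chain defined above (input, activity, output, outcome, impact) can be represented as a simple structure. This sketch is the editor's illustration, not OECD terminology: the field names and sample values are assumptions, and the split of the chain between monitoring and evaluation reflects the general tendency described in this section, not a fixed rule.

```python
from dataclasses import dataclass

# Illustrative sketch of the OECD-DAC results chain; the stage names follow
# the definitions above, but the structure and examples are the editor's own.
@dataclass
class ResultsChain:
    inputs: list      # financial, human, and material resources
    activities: list  # work performed to mobilise the inputs
    outputs: list     # products, goods and services the activities produce
    outcomes: list    # likely or achieved short/medium-term effects
    impacts: list     # long-term effects, intended or unintended

chain = ResultsChain(
    inputs=["funding", "technical advisers"],
    activities=["train local health workers"],
    outputs=["120 workers certified"],
    outcomes=["clinic coverage expanded"],
    impacts=["reduced child mortality"],
)

# Monitoring tends to track the left of the chain (resources and delivery);
# evaluation tends to examine the right (effects and long-term change).
monitoring_scope = chain.inputs + chain.activities + chain.outputs
evaluation_scope = chain.outcomes + chain.impacts
print(monitoring_scope, evaluation_scope)
```

The point of the sketch is the division of labour: monitoring asks whether resources were used and outputs delivered, while evaluation asks whether the intervention actually changed conditions.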
As demonstrated in the graphic above, monitoring and evaluation are both functions that
are meant to be fully integrated into a design, planning, management/execution and
assessment cycle. One example of this kind of end-to-end process model that may be of
19 Ibid.
There are many different types of monitoring and many different types of evaluation, all
designed to suit specific needs. In addition to the documents that have already been
referenced in previous recommendations, some documents, organisations and websites
related to this topic include:
Church, Cheyanne and Rogers, Mark, Designing for Results: Integrating Monitoring
and Evaluation in Conflict Transformation Programs, 2006
USAID, Monitoring and Evaluation in Post-Conflict Settings, 2006
Social Impact, Toolkit on Monitoring and Evaluating Fragile States and Peacebuilding Programs
World Bank Independent Evaluation Group, www.worldbank.org/oed
OECD/DAC Network on Development Evaluation, www.oecd.org
USAID EvalWeb, www.evalweb.usaid.gov
USAID Monitoring and Evaluation TIPS Documents
o Preparing a Performance Management Plan
o Selecting Performance Indicators
o Establishing Performance Targets
Monitoring and Evaluation News, www.mande.co.uk
The Measure Project, www.cpc.unc.edu/measure
APPENDIX G
MEASURING PROGRESS IN CONFLICT ENVIRONMENTS
1. General
Figure G-1. Metrics
2. Framework
a. MPICE includes about 800 generic, quantitative outcome metrics that measure
institutional capacities and drivers of conflict in five sectors: safe and secure
environment, political moderation and stable democracy, rule of law, sustainable
economy, and social well-being (see Figure G-2). This comprehensive set of outcome
metrics (measures of effectiveness) enables planners to assess mission progress in an
objective, systematic, and holistic way.
b. Development of MPICE was sponsored by the Department of Defense, United
States Institute of Peace, US Agency for International Development (USAID),
Department of State, and other U.S. government agencies in cooperation with
multinational, non-governmental organization (NGO), and academic partners.
Figure G-2. Institutional Capacities and Drivers of Conflict
3. Training System
a. The MPICE Training System is a computer-based training system that teaches
facts, concepts, process steps, analytical skills, and strategies needed to use the MPICE
framework effectively (see Figure G-3). The yellow boxes show the three main steps
when using MPICE. Tailoring metrics includes selecting and adapting the generic
metrics and then red-teaming them, so they satisfy information needs and are appropriate
for the mission and operational environment. Methodologies for collecting data include
quantitative data, survey/polling data, expert knowledge, and content analysis. Analyzing
data includes weighting metrics, detecting data patterns and trends, and generating and
evaluating causal explanations.
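The "analyzing data" step above (weighting metrics within a sector) can be sketched as a weighted aggregation. The sector names follow Figure G-2, but the weights, scores, and function below are the editor's illustration only; they are not part of MPICE.

```python
# Illustrative sketch of weighted metric aggregation in the spirit of the
# MPICE "analyzing data" step. All weights and scores are invented.

SECTORS = [
    "safe and secure environment",
    "political moderation and stable democracy",
    "rule of law",
    "sustainable economy",
    "social well-being",
]

def sector_score(metric_scores, weights):
    """Weighted average of normalised metric scores (0-100) for one sector."""
    total_weight = sum(weights)
    return sum(s * w for s, w in zip(metric_scores, weights)) / total_weight

# Hypothetical normalised scores for three rule-of-law metrics; the weights
# stand in for a (notional) red-teamed judgement of each metric's importance.
rule_of_law = sector_score([40, 70, 55], weights=[2, 1, 1])
print(f"rule of law: {rule_of_law:.2f}")
```

Tracking such sector scores over successive assessment cycles is one simple way to surface the data patterns and trends the text describes; causal explanation of those trends remains an analyst's task.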
APPENDIX H
REFERENCES
The following are references for assessment:
1. Joint Publication (JP) 1-02, Department of Defense Dictionary of Military and
Associated Terms.
2.
3.
4.
5.
6.
7.
16. US Agency for International Development (USAID), Bureau for Democracy,
Conflict, and Humanitarian Assistance, Office of US Foreign Disaster Assistance, Field
Operations Guide for Disaster Assessment and Response.
17. Department of State, Operations, OPA Planning Cell, PRT Assessments, Planning
and Assessment User Guide.
18. Department of State, Bureau for International Narcotics and Law Enforcement
Affairs, Criminal Justice Sector Assessment Tool, Version 3.0, May 2009.
GLOSSARY
PART I ABBREVIATIONS AND ACRONYMS
AAR
AC
AWG
CCDR	combatant commander
CCMD	combatant command
CJSART	Criminal Justice Sector Assessment Rating Tool
COA	course of action
COG	center of gravity
DIME
DOTMLPF
FRAGORD	fragmentary order
ICAF
ICAT
ISR
J-2
J-3
J-5
JFC
JIPOE
JIOC
JISE
JOPP
JP
JPG
JTF
KD	knowledge development
LOO	line of operation
MISO
MOE
MOP
MPICE
NATO
NDAA
OPLAN	operation plan
OPORD	operation order
OPT	operational planning team
ROE	rules of engagement
R&S	reconstruction and stabilization
S/CRS
SecDef
SME
US	United States
USAID	United States Agency for International Development
USG	United States government
USJFCOM	United States Joint Forces Command
USSOCOM	United States Special Operations Command
PART II TERMS AND DEFINITIONS
adversary. A party acknowledged as potentially hostile to a friendly party and against
which the use of force may be envisaged. (JP 1-02. SOURCE: JP 3-0)
assessment. 1. A continuous process that measures the overall effectiveness of
employing joint force capabilities during military operations. 2. Determination of the
progress toward accomplishing a task, creating an effect, or achieving an objective.
3. Analysis of the security, effectiveness, and potential of an existing or planned
intelligence activity. 4. Judgment of the motives, qualifications, and characteristics
of present or prospective employees or agents. (JP 1-02. SOURCE: JP 3-0)
base plan. In the context of joint operation planning level 2 planning detail, a type of
operation plan that describes the concept of operations, major forces, sustainment
concept, and anticipated timelines for completing the mission. It normally does not
include annexes or time-phased force and deployment data. (JP 1-02. SOURCE: JP 5-0)
battle damage assessment. The estimate of damage resulting from the application of
lethal or nonlethal military force. Battle damage assessment is composed of physical
damage assessment, functional damage assessment, and target system assessment.
Also called BDA. (JP 1-02. SOURCE: JP 3-0)
branch. 1. A subdivision of any organization. 2. A geographically separate unit of an
activity, which performs all or part of the primary functions of the parent activity on a
smaller scale. Unlike an annex, a branch is not merely an overflow addition. 3. An
arm or service of the Army. 4. The contingency options built into the base plan. A
branch is used for changing the mission, orientation, or direction of movement of a
force to aid success of the operation based on anticipated events, opportunities, or
disruptions caused by enemy actions and reactions. (JP 1-02. SOURCE: JP 5-0)
campaign. A series of related major operations aimed at achieving strategic and
operational objectives within a given time and space. (JP 1-02. SOURCE: JP 5-0)
campaign plan. A joint operation plan for a series of related major operations aimed at
achieving strategic or operational objectives within a given time and space. (JP 1-02.
SOURCE: JP 5-0)
center of gravity. The source of power that provides moral or physical strength,
freedom of action, or will to act. Also called COG. (JP 1-02. SOURCE: JP 3-0)
coalition. An ad hoc arrangement between two or more nations for common action. (JP
1-02. SOURCE: JP 5-0)
combatant command. A unified or specified command with a broad continuing mission
under a single commander established and so designated by the President, through the
Secretary of Defense and with the advice and assistance of the Chairman of the Joint
Chiefs of Staff. Combatant commands typically have geographic or functional
responsibilities. Also called CCMD. (JP 1-02. SOURCE: JP 5-0)
combatant commander. A commander of one of the unified or specified combatant
commands established by the President. Also called CCDR. See also combatant
command. (JP 1-02)
combat assessment. The determination of the overall effectiveness of force employment
during military operations. Combat assessment is composed of three major
components: (a) battle damage assessment; (b) munitions effectiveness assessment;
and (c) reattack recommendation. Also called CA. (JP 1-02)
commander's intent. A concise expression of the purpose of the operation and the
desired end state. It may also include the commander's assessment of the adversary
commander's intent and an assessment of where and how much risk is acceptable
during the operation. (JP 1-02. SOURCE: JP 3-0)
concept of operations. A verbal or graphic statement that clearly and concisely
expresses what the joint force commander intends to accomplish and how it will be
done using available resources. The concept is designed to give an overall picture of
the operation. Also called commander's concept or CONOPS. (JP 1-02. SOURCE:
JP 5-0)
concept plan. In the context of joint operation planning level 3 planning detail, an
operation plan in an abbreviated format that may require considerable expansion or
alteration to convert it into a complete operation plan or operation order. Also called
CONPLAN. (JP 1-02. SOURCE: JP 5-0)
contingency. A situation requiring military operations in response to natural disasters,
terrorists, subversives, or as otherwise directed by appropriate authority to protect US
interests. (JP 1-02. SOURCE: JP 5-0)
contingency operation. A military operation that is either designated by the SecDef as a
contingency operation or becomes a contingency operation as a matter of law (title
10, United States Code, section 101[a][13]). It is a military operation that: a. is
designated by the SecDef as an operation in which members of the Armed Forces are
or may become involved in military actions, operations, or hostilities against an
enemy of the United States or against an opposing force; or b. is created by definition
of law. Under Title 10, United States Code, Section 101[a][13][B], a contingency
operation exists if a military operation results in the (1) call-up to (or retention on)
active duty of members of the uniformed Services under certain enumerated statutes
(Title 10, United States Code, Sections 688, 12301(a), 12302, 12304, 12305, 12406,
or 331-335); and (2) the call-up to (or retention on) active duty of members of the
uniformed Services under other (non-enumerated) statutes during war or national
emergency declared by the President or Congress. See also contingency; operation.
(JP 1-02. SOURCE: JP 1)
course of action. 1. Any sequence of activities that an individual or unit may follow. 2.
A possible plan open to an individual or commander that would accomplish, or is
related to the accomplishment of the mission. 3. The scheme adopted to accomplish a
job or mission. 4. A line of conduct in an engagement. 5. A product of the Joint
Operation Planning and Execution System concept development phase and the
course-of-action determination steps of the joint operation planning process. Also
called COA. (JP 1-02. SOURCE: JP 5-0)
crisis. An incident or situation involving a threat to a nation, its territories, citizens,
military forces, possessions, or vital interests that develops rapidly and creates a
condition of such diplomatic, economic, political, or military importance that
commitment of military forces and resources is contemplated in order to achieve
national objectives. (JP 1-02. SOURCE: JP 3-0)
crisis action planning. One of the two types of joint operation planning. The Joint
Operation Planning and Execution System process involving the time-sensitive
development of joint operation plans and operation orders for the deployment,
employment, and sustainment of assigned and allocated forces and resources in
response to an imminent crisis. Crisis action planning is based on the actual
circumstances that exist at the time planning occurs. Also called CAP. (JP 1-02.
SOURCE: JP 5-0)
deliberate planning. The Adaptive Planning and Execution System planning activities
that routinely occur in non-crisis situations. (JP 1-02. SOURCE: JP 5-0)
deficiency analysis. A step within the assessment process in which assessed
inadequacies in the attainment of desired effects are examined at the MOE, indicator,
and MOP level. (This term and its definition are applicable only in the context of
this publication and cannot be referenced outside this publication.)
effect. 1. The physical or behavioral state of a system that results from an action, a set of
actions, or another effect. 2. The result, outcome, or consequence of an action. 3. A
change to a condition, behavior, or degree of freedom. (JP 1-02. SOURCE: JP 3-0)
end state. The set of required conditions that defines achievement of the commander's
objectives. (JP 1-02. SOURCE: JP 3-0)
fires. The use of weapon systems to create a specific lethal or nonlethal effect on a
target. (JP 1-02. SOURCE: JP 3-09)
fragmentary order. An abbreviated form of an operation order issued as needed after an
operation order to change or modify that order or to execute a branch or sequel to that
order. Also called FRAGORD. (JP 1-02. SOURCE: JP 5-0)
joint force commander. A general term applied to a combatant commander, subunified
commander, or joint task force commander authorized to exercise combatant
command (command authority) or operational control over a joint force. Also called
JFC. (JP 1-02. SOURCE: JP 1)
joint intelligence preparation of the operational environment. The analytical process
used by joint intelligence organizations to produce intelligence assessments,
estimates, and other intelligence products in support of the joint force commander's
decision making process. It is a continuous process that includes defining the
operational environment, describing the effects of the operational environment,
evaluating the adversary, and determining and describing adversary potential courses
of action. Also called JIPOE. (JP 1-02. SOURCE: JP 2-01.3)
joint intelligence operations center. An interdependent, operational intelligence
organization at the Department of Defense, combatant command, or joint task force
(if established) level, that is integrated with national intelligence centers, and capable
of accessing all sources of intelligence impacting military operations planning,
execution, and assessment. Also called JIOC. (JP 1-02. SOURCE: JP 2-0)
joint intelligence support element. A subordinate joint force element whose focus is on
intelligence support for joint operations, providing the joint force commander, joint
staff, and components with the complete air, space, ground, and maritime adversary
situation. Also called JISE. (JP 1-02. SOURCE: JP 2-01)
joint interagency coordination group. An interagency staff group that establishes
regular, timely, and collaborative working relationships between civilian and military
operational planners. Composed of US Government civilian and military experts
accredited to the combatant commander and tailored to meet the requirements of a
supported combatant commander, the joint interagency coordination group provides
the combatant commander with the capability to collaborate at the operational level
with other US Government civilian agencies and departments. Also called JIACG.
(JP 1-02. SOURCE: JP 3-08)
joint operation planning. Planning activities associated with joint military operations by
combatant commanders and their subordinate joint force commanders in response to
contingencies and crises. Joint operation planning includes planning for the
mobilization, deployment, employment, sustainment, redeployment, and
demobilization of joint forces. (JP 1-02. SOURCE: JP 5-0)
joint operation planning process. An orderly, analytical process that consists of a
logical set of steps to analyze a mission; develop, analyze, and compare alternative
courses of action against criteria of success and each other; select the best course of
action; and produce a joint operation plan or order. Also called JOPP. (JP 1-02.
SOURCE: JP 5-0)
joint operations. A general term to describe military actions conducted by joint forces,
or by Service forces in relationships (e.g., support, coordinating authority), which, of
themselves, do not establish joint forces. (JP 1-02. SOURCE: JP 3-0)
operation. 1. A series of tactical actions with a common purpose or unifying theme. (JP
1) 2. A military action or the carrying out of a strategic, operational, tactical, service,
training, or administrative military mission. (JP 1-02. SOURCE: JP 3-0)
operational art. The cognitive approach by commanders and staffs, supported by their
skill, knowledge, experience, creativity, and judgment, to develop strategies,
campaigns, and operations to organize and employ military forces by integrating
ends, ways, and means. (JP 1-02. SOURCE: JP 3-0)
operational environment. A composite of the conditions, circumstances, and influences
that affect the employment of capabilities and bear on the decisions of the
commander. (JP 1-02. SOURCE: JP 3-0)
operational level of war. The level of war at which campaigns and major operations are
planned, conducted, and sustained to achieve strategic objectives within theaters or
other operational areas. (JP 1-02. SOURCE: JP 3-0)
operation order. A directive issued by a commander to subordinate commanders for the
purpose of effecting the coordinated execution of an operation. Also called OPORD.
(JP 1-02. SOURCE: JP 5-0)
operation plan. 1. Any plan for the conduct of military operations prepared in response
to actual and potential contingencies. 2. In the context of joint operation planning
level 4 planning detail, a complete and detailed joint plan containing a full description
of the concept of operations, all annexes applicable to the plan, and a time-phased
force and deployment data. It identifies the specific forces, functional support, and
resources required to execute the plan and provide closure estimates for their flow
into the theater. Also called OPLAN. (JP 1-02. SOURCE: JP 5-0)
phase. In joint operation planning, a definitive stage of an operation or campaign during
which a large portion of the forces and capabilities are involved in similar or mutually
supporting activities for a common purpose. (JP 1-02. SOURCE: JP 5-0)
support. 1. The action of a force that aids, protects, complements, or sustains another
force in accordance with a directive requiring such action. 2. A unit that helps
another unit in battle. 3. An element of a command that assists, protects, or supplies
other forces in combat. (JP 1-02. SOURCE: JP 1)
system. A functionally, physically, and/or behaviorally related group of regularly
interacting or interdependent elements; that group of elements forming a unified
whole. (JP 1-02. SOURCE: JP 3-0)
targeting. The process of selecting and prioritizing targets and matching the appropriate
response to them, considering commander's objectives, operational requirements,
capabilities, and limitations. (JP 1-02. SOURCE: JP 3-60)
task assessment. Measures the completion or accomplishment of tasks, or a set of tasks,
through the use of measures of performance. (This term and its definition are
applicable only in the context of this publication and cannot be referenced outside this
publication.)
theater. The geographical area for which a commander of a combatant command has
been assigned responsibility. (JP 1-02. SOURCE: JP 1)
theater of operations. An operational area defined by the geographic combatant
commander for the conduct or support of specific military operations. Multiple
theaters of operations normally will be geographically separate and focused on
different missions. Theaters of operations are usually of significant size, allowing for
operations in depth and over extended periods of time. Also called TO. (JP 1-02.
SOURCE: JP 3-0)
theater of war. Defined by the Secretary of Defense or the geographic combatant
commander, the area of air, land, and water that is, or may become, directly involved
in the conduct of the war. A theater of war does not normally encompass the
geographic combatant commander's entire area of responsibility and may contain
more than one theater of operations. (JP 1-02. SOURCE: JP 3-0)
unified command. A command with a broad continuing mission under a single
commander and composed of significant assigned components of two or more
Military Departments, that is established and so designated by the President through
the Secretary of Defense with the advice and assistance of the Chairman of the Joint
Chiefs of Staff. Also called unified combatant command. (JP 1-02. SOURCE: JP
1)
Developed by the
Joint Staff, J-7
Joint and Coalition Warfighting