Module 3: Health Technology Assessment Study Designs and Evidence


DMC COLLEGE FOUNDATION INC.
ALLIED MEDICAL SCIENCES DEPARTMENT
BS PHARMACY

Health Technology Assessment and Health Policy (With Pharmacoeconomics)
Lecture Activity Sheet #3

Name: _____________________________________________________ Class number: _________


Section: ____________ Schedule: ________________________ Date: ________________
MODULE NO. 3
HEALTH TECHNOLOGY ASSESSMENT STUDY DESIGNS AND EVIDENCE
Learning Objectives
At the end of the activity, you will be able to:
• Discuss health technology
• Elaborate on health technology assessment
• Explain what HTA assesses
PRIMARY DATA METHODS
A. Primary Data Studies: Diverse Attributes
Primary data methods involve collection of original data, ranging from more scientifically rigorous approaches
for determining the causal effect of health technologies, such as randomized controlled trials (RCTs), to less
rigorous ones, such as case series. These study designs can be described and categorized based on multiple
attributes or dimensions, e.g.:

• Comparative vs. non-comparative


• Separate control group vs. no separate control group (e.g., patients serving as their own controls)
• Participants (study populations /groups) defined by a health outcome vs. by having been exposed to, or
received or been assigned, an intervention
• Prospective vs. retrospective
• Interventional vs. observational
• Experimental vs. non-experimental
• Random assignment vs. non-random assignment of patients to treatment and control groups
All experimental studies are, by definition, interventional studies. Some non-experimental studies can be
interventional, e.g., if investigators assign a technology to a patient population but without a control group or with
a non-randomized control group, and then assess their outcomes. An interventional cross-sectional design can
be used to assess the accuracy of a diagnostic test. Some study designs are better at rigorous demonstration of
causality in well-defined circumstances, such as the RCT. Other study designs may be better for reflecting real-
world practice, such as pragmatic clinical trials and some observational studies, such as cohort, cross-sectional,
or case-control studies using data from registries, surveillance, electronic health (or medical) records, and
payment claims.
Box III-1 categorizes various types of primary data studies as experimental and non-experimental. Researchers have developed various frameworks, schemes, and other tools for classifying study designs, such as for the purpose of conducting systematic reviews (Hartling 2010).

B. Assessing the Quality of Primary Data Studies
Our confidence that a study's estimate of a treatment effect, of the accuracy of a screening or diagnostic test, or of another impact of a health care technology is correct reflects our understanding of the quality of the
study. For various types of interventions, we examine certain attributes of the design and conduct of a study to
assess the quality of that study. For example, some of the attributes or criteria that are commonly used to assess
the quality of studies for demonstrating the internal validity of the impact of therapies on health outcomes are the
following:
• Prospective, i.e., following a study population over time as it receives an intervention or exposure and
experiences outcomes, rather than retrospective design
• Experimental rather than observational
• Controlled, i.e., with one or more comparison groups, rather than uncontrolled
• Contemporaneous control groups rather than historical ones
• Internal (i.e., managed within the study) control groups rather than external ones
• Concealment of the allocation of patients to intervention and control groups
• Randomized assignment of patients to intervention and control groups
• Blinding of patients, clinicians, and investigators as to patient assignment to intervention and control groups
• Large enough sample size (number of patients/participants) to detect true treatment effects with statistical significance
• Minimal patient drop-outs or loss to follow-up of patients (or differences in these between intervention
and control groups) for the duration of the study
• Consistency of pre-specified study protocol (patient populations, assignment to intervention and control
groups, regimens, etc.) and outcome measures with the reported (post-study) protocol and outcome
measures
Similarly, some attributes that are commonly used for assessing the external validity of the impact of therapies
and other technologies on health outcomes include:
• Flexible entry criteria to identify/enroll a patient population that is representative of the patient diversity likely to be offered the intervention in practice, including demographic characteristics, risk factors, disease stage/severity, and comorbidities
• Large enough patient population to conduct meaningful subgroup analyses
• Dosing, regimen, technique, and delivery of the intervention consistent with anticipated practice
• Comparator is standard of care or other relevant, clinically acceptable (not substandard) intervention
• Dosing, regimen, or other forms of delivering the comparator consistent with standard care
• Patient monitoring and efforts to maintain patient adherence comparable to those in practice
• Accompanying/concurrent/ancillary care similar to what will be provided in practice
• Training, expertise, and skills of clinicians and other health care providers similar to those available or
feasible for providers anticipated to deliver the intervention
• Selection of outcome measures relevant to those experienced by and important to intended patient
groups
• Systematic effort to follow up on all patients to minimize attrition
• Intention-to-treat analysis used to account for all study patients
• Study duration consistent with the course/episode of disease/condition in practice in order to detect
outcomes of importance to patients and clinicians
• Multiple study sites representative of type/level of health care settings and patient and clinician
experience anticipated in practice

The commonly recognized attributes of study quality noted above that strengthen the internal and external
validity of primary data studies are derived from an extensive body of methodological concepts and principles,
including those summarized below: confounding and the need for controls, prospective vs. retrospective design,
sources of bias, random error, and selected other factors.
1. Types of Validity in Methods and Measurement
Whether they are experimental or non-experimental in design, studies vary in their ability to produce valid
findings. Validity refers to how well a study or data collection instrument measures what it is intended to measure.
Understanding different aspects of validity helps in comparing the strengths and weaknesses of alternative study
designs and our confidence in the findings generated by those studies. Although these concepts are often
addressed in reference to primary data methods, they generally apply as well to integrative methods.
Internal validity refers to the extent to which the results of a study accurately represent the causal relationship
between an intervention and an outcome in the particular circumstances of that study. This includes the extent
to which the design and conduct of a study minimize the risk of any systematic (non-random) error (i.e., bias) in
the study results. Internal validity can be suspect when biases in the design or conduct of a clinical trial or other
studies could have affected outcomes, thereby causing the study results to deviate from the true magnitude of
the treatment effect. True experiments such as RCTs generally have high internal validity.
External validity refers to the extent to which the results of a study conducted under particular circumstances
can be generalized (or are applicable) to other circumstances. When the circumstances of a particular study
(e.g., patient characteristics, the technique of delivering a treatment, or the setting of care) differ from the
circumstances of interest (e.g., patients with different characteristics, variations in the technique of delivering a
treatment, or alternative settings of care), the external validity of the results of that study may be limited.
Construct validity refers to how well a measure is correlated with other accepted measures of the construct of
interest (e.g., pain, anxiety, mobility, quality of life), and discriminates between groups known to differ according
to the construct. Face validity is the ability of a measure to represent reasonably (i.e., to be acceptable “on its
face”) a construct (i.e., a concept, trait, or domain of interest) as judged by someone with knowledge or expertise
in the construct.
Content validity refers to the degree to which the set of items of a data collection instrument is known to
represent the range or universe of meanings or dimensions of a construct of interest. For example, how well do
the domains of a health-related quality-of-life index for rheumatoid arthritis represent the aspects of quality of life or
daily functioning that are important to patients with rheumatoid arthritis?
Criterion validity refers to how well a measure, including its various domains or dimensions, is correlated with
a known gold standard or definitive measurement if one exists. The similar concept of concurrent validity refers
to how well a measure correlates with a previously validated one, and the ability of a measure to accurately
differentiate between different groups at the time the measure is applied. Predictive validity refers to the ability
to use differences in a measure of a construct to predict future events or outcomes. It may be considered a
subtype of criterion validity.
Convergent validity refers to the extent to which different measures that are intended to measure the same
construct yield similar results, such as two measures of quality of life. Discriminant validity concerns whether
different measures that are intended to measure different constructs actually fail to be positively associated with
each other. Convergent validity and discriminant validity contribute to, or can be considered subtypes of,
construct validity.
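
To make this concrete, here is a minimal sketch of how convergent validity is often examined in practice: by correlating scores from two instruments intended to measure the same construct. The scores and instrument names below are invented for illustration.

```python
# Hypothetical scores from two instruments intended to measure the same
# construct (quality of life) in the same ten patients.
qol_instrument_a = [62, 75, 58, 80, 71, 66, 90, 55, 68, 77]
qol_instrument_b = [60, 78, 55, 83, 69, 64, 88, 57, 70, 74]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# A high positive correlation supports convergent validity; a near-zero
# correlation with a measure of a *different* construct would support
# discriminant validity.
print(f"r = {pearson_r(qol_instrument_a, qol_instrument_b):.2f}")
```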

2. Confounding and the Need for Controls
Confounding occurs when any factor that is associated with an intervention has an impact on an outcome that
is independent of the impact of the intervention. As such, confounding can “mask” or muddle the true impact of
an intervention. To diminish any impact of confounding factors, it is necessary to provide a basis for comparing
what happens to patients who receive an intervention with what happens to those who do not.
The main purpose of control groups is to enable isolating the impact of an intervention of interest on patient
outcomes from the impact of any extraneous factors. The composition of the control group is intended to be as
close as possible to that of the intervention group, and both groups are managed as similarly as possible, so that
the only difference between the groups is that one receives the intervention of interest and the other does not.
In controlled clinical trials, the control groups may receive a current standard of care, no intervention, or a
placebo.
For a factor to be a confounder in a controlled trial, it must differ for the intervention and control groups and be
predictive of the treatment effect, i.e., it must have an impact on the treatment effect that is independent of the
intervention of interest. Confounding can arise due to differences between the intervention and control groups,
such as differences in baseline risk factors at the start of a trial or different exposures during the trial that could
affect outcomes. Investigators may not be aware of all potential confounding factors in a trial. Examples of
potentially confounding factors are age, the prevalence of comorbidities at baseline, and different levels of
ancillary care. To the extent that potentially confounding factors are present at different rates between
comparison groups, a study is subject to selection bias (described below).
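
The following worked illustration, using invented counts, shows how a confounder (here, age) can produce an apparent treatment effect in a crude comparison even though the intervention has no effect within any age stratum.

```python
# Invented counts illustrating confounding by age. Within each age
# stratum the success rate is identical in both arms (no true effect),
# yet the crude comparison suggests a large benefit because the
# intervention arm happens to contain mostly younger patients.
groups = {
    "intervention": {"young": (128, 160), "old": (16, 40)},   # (successes, n)
    "control":      {"young": (32, 40),   "old": (64, 160)},
}

for arm, strata in groups.items():
    successes = sum(s for s, n in strata.values())
    total = sum(n for s, n in strata.values())
    print(f"{arm}: crude success rate = {successes / total:.0%}")
    for stratum, (s, n) in strata.items():
        print(f"  {stratum}: {s / n:.0%}")
```

Here the crude rates (72% vs. 48%) differ by 24 percentage points even though the stratified rates (80% for young, 40% for old) are identical across arms; randomization or a stratified analysis would prevent or expose this distortion.
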
Most controlled studies use contemporaneous controls alongside (i.e., constituted and followed simultaneously
with) intervention groups. Investigators sometimes rely on historical control groups. However, a historical control
group is subject to known or unknown inherent differences (e.g., risk factors or other prognostic factors) from a
current intervention group, and environmental or other contextual differences arising due to the passage of time
that may confound outcomes. In some instances, including those noted below, historical controls have sufficed
to demonstrate definitive treatment effects. In a crossover design study, patients start in one group (intervention
or control) and then are switched to the other (sometimes multiple times), thereby acting as their own controls,
although such designs are subject to certain forms of bias.
Various approaches are used to ensure that intervention and control groups comprise patients with similar
characteristics, diminishing the likelihood that baseline differences between them will confound observed
treatment effects. The best of these approaches is the randomization of patients to intervention and control
groups. Random allocation diminishes the impact of any known or unrecognized potentially confounding factors
by tending to distribute those factors evenly across the groups to be compared. “Pseudo-randomization”
approaches such as alternate assignment or using birthdays or identification numbers to assign patients to
intervention and control groups can be vulnerable to confounding.
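
As a minimal sketch of this distinction, the snippet below contrasts simple randomization with a "pseudo-randomization" scheme based on alternate assignment; the function names and patient identifiers are illustrative only.

```python
import random

def randomize(patient_ids, seed=2024):
    """Simple randomization: each patient has the same probability of
    assignment to either arm (group sizes may be unequal by chance;
    blocked randomization is often used to keep them balanced)."""
    rng = random.Random(seed)
    return {pid: rng.choice(["intervention", "control"]) for pid in patient_ids}

def pseudo_randomize(patient_ids):
    """'Pseudo-randomization' by alternate assignment: the sequence is
    predictable, so anyone enrolling patients can foresee (and thus
    influence) the next assignment, inviting selection bias."""
    return {pid: ("intervention" if i % 2 == 0 else "control")
            for i, pid in enumerate(patient_ids)}

patients = [f"P{i:03d}" for i in range(1, 11)]
print(randomize(patients))
print(pseudo_randomize(patients))
```
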
3. Prospective vs. Retrospective Design
Prospective studies are planned and implemented by investigators using real-time data collection. These
typically involve the identification of one or more patient groups according to specified risk factors or exposures,
followed by collection of baseline (i.e., initial, prior to intervention) data, delivery of interventions of interest and
controls, collecting follow-up data, and comparing baseline to follow-up data for the patient groups. In
retrospective studies, investigators collect samples of data from past interventions and outcomes involving one
or more patient groups.

Prospective studies are usually subject to fewer types of confounding and bias than retrospective studies. In
particular, retrospective studies are more subject to selection bias than prospective studies. In retrospective
studies, patients’ interventions and outcomes have already transpired and been recorded, raising opportunities
for intentional or unintentional selection bias on the part of investigators. In prospective studies, patient
enrollment and data collection can be designed to reduce bias (e.g., selection bias and detection bias), which is
an advantage over most retrospective studies. Even so, the logistical challenges of maintaining the blinding of
patients and investigators are considerable, and unblinding can introduce performance and detection bias.
Prospective and retrospective studies have certain other relative advantages and disadvantages that render
them more or less useful for certain types of research questions. Both are subject to certain types of missing or
otherwise limited data. As retrospective studies primarily involve the selection and analysis of existing data, they
tend to be far less expensive than prospective studies. However, their dependence on existing data makes it
difficult to fill data gaps or add data fields to data collection instruments, although they can rely in part on importing
and adjusting data from other existing sources. Given the costs of enrolling enough patients and collecting
sufficient data to achieve statistical significance, prospective studies tend to be more suited to investigating
health problems that are prevalent and yield health outcomes or other events that occur relatively frequently and
within short follow-up periods. The typically shorter follow-up periods of prospective studies may subject them to
seasonal or other time-dependent biases, whereas retrospective studies can be designed to extract data from
longer time spans. Retrospective studies offer the advantage of being able to canvass large volumes of data
over extended time periods (e.g., from registries, insurance claims, and electronic health records) to identify
patients with specific sets of risk factors or rare or delayed health outcomes, including certain adverse events.
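
As a sketch of how a retrospective study might canvass such data, the following assumes a hypothetical payment-claims table (all column names, drug codes, and event codes are invented) and identifies exposed patients who later have a claim coded for a rare adverse event.

```python
import pandas as pd

# Hypothetical payment-claims extract; the column names, drug codes,
# and event codes below are invented for illustration.
claims = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3, 4],
    "drug_code":  ["A10", "A10", "A10", "B20", "A10", "B20"],
    "event_code": [None, "LIVER_INJ", None, None, None, "LIVER_INJ"],
})

# Identify patients exposed to the drug of interest, then find which of
# them have a claim coded for the rare adverse event.
exposed = set(claims.loc[claims["drug_code"] == "A10", "patient_id"])
events = claims[(claims["event_code"] == "LIVER_INJ")
                & (claims["patient_id"].isin(exposed))]
print(events)
```
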
4. Sources of Bias
The quality of a primary data study determines our confidence that its estimated treatment effect is correct.
Bias refers to any systematic (i.e., not due to random error) deviation in an observation
from the true nature of an event. In clinical trials, bias may arise from any factor that systematically distorts
(increases or decreases) the observed magnitude of an outcome (e.g., treatment effect or harm) relative to the
true magnitude of the outcome. As such, bias diminishes the accuracy (though not necessarily the precision;
see discussion below) of an observation. Biases may arise from inadequacies in the design, conduct, analysis,
or reporting of a study.
Major types of bias in comparative primary data studies are described below, including selection bias,
performance bias, detection bias, attrition bias, and reporting bias (Higgins, Altman, Gøtzsche 2011; Higgins,
Altman, Sterne 2011; Viswanathan 2014). Also noted are techniques and other study attributes that tend to
diminish each type of bias. These attributes for diminishing bias also serve as criteria for assessing the quality
of individual studies.
Selection bias refers to systematic differences between baseline characteristics of the groups that are
compared, which can arise from, e.g., physician assignment of patients to treatments, patient self-selection of
treatments, or association of treatment assignment with patient clinical characteristics or demographic factors.
Among the means for diminishing selection bias are random sequence generation (random allocation of patients
to treatment and control groups) and allocation concealment for RCTs, control groups to diminish confounders
in cohort studies, and case matching in case-control studies.
Allocation concealment refers to the process of ensuring that the persons assessing patients for potential entry
into a trial, as well as the patients, do not know whether any patient will be allocated to an intervention group or
control group. This helps to prevent the persons who manage the allocation, or the patients, from influencing
(intentionally or not) the assignment of a patient to one group or another. Patient allocation based on personal identification numbers, birthdates, or medical record numbers may not ensure concealment. Better methods
include centralized randomization (i.e., managed at one site rather than at each enrollment site); sequentially
numbered, opaque, sealed envelopes; and coded medication bottles or containers.
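
A minimal sketch of centralized randomization with concealed allocation, in which the coordinating center generates a sequentially numbered allocation list and a site learns an assignment only after enrolling the patient; all names and numbers here are illustrative.

```python
import random

def generate_concealed_allocations(n, seed):
    """Centrally generate a sequentially numbered allocation list with
    equal group sizes. Only the coordinating center holds the mapping;
    enrollment sites see just the next sequence number (analogous to a
    sequentially numbered, opaque, sealed envelope)."""
    rng = random.Random(seed)
    assignments = ["intervention", "control"] * (n // 2)
    rng.shuffle(assignments)
    return {seq: arm for seq, arm in enumerate(assignments, start=1)}

# Held by the coordinating center, not by the enrollment sites.
allocation_list = generate_concealed_allocations(8, seed=42)

# A site enrolling its next patient requests the next sequence number;
# the arm is revealed only after the patient is irrevocably enrolled.
print(allocation_list[1])
```
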
Performance bias refers to systematic differences between comparison groups in the care that is provided, or
in exposure to factors other than the interventions of interest. Techniques for diminishing performance bias
include blinding of patients and providers (in RCTs and other controlled trials in particular), adhering to the study
protocol, and sustaining patients’ group assignments.

Detection (or ascertainment) bias refers to systematic differences between groups in how outcomes are
assessed. These differences may arise due to, e.g., inadequate blinding of outcome assessors regarding patient
treatment assignment, reliance on patient or provider recall of events (also known as recall bias), inadequate
outcome measurement instruments, or faulty statistical analysis. Whereas allocation concealment is intended to
ensure that persons who manage patient allocation, as well as the patients themselves, do not influence patient
assignment to one group or another, blinding refers to preventing anyone who could influence the assessment
of outcomes from knowing which patients have been assigned to one group or another. Knowledge of patient
assignment itself can affect outcomes as experienced by patients or assessed by investigators. The techniques
for diminishing detection bias include blinding of outcome assessors including patients, clinicians, investigators,
and/or data analysts, especially for subjective outcome measures, and validated and reliable outcome
measurement instruments and techniques.

Attrition bias refers to systematic differences between comparison groups in withdrawals (drop-outs) from a
study, loss to follow-up, or other exclusion of patients/participants and how these losses are analyzed. Ignoring
these losses or accounting for them differently between groups can skew study findings, as patients who
withdraw or are lost to follow-up may differ systematically from those patients who remain for the duration of the
study. Indeed, patients’ awareness of whether they have been assigned to a particular treatment or control group
may differentially affect their likelihood of dropping out of a trial. Techniques for diminishing attrition bias include
blinding of patients as to treatment assignment, completeness of follow-up data for all patients, and intention-to-
treat analysis (with imputations for missing data as appropriate).
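
The following sketch, with invented outcome data, contrasts an intention-to-treat analysis (all patients analyzed as assigned, with missing outcomes conservatively imputed as failures) against a per-protocol analysis restricted to completers.

```python
# Invented trial records: assigned arm, whether the patient completed
# the study, and outcome (1 = success, 0 = failure, None = missing
# because the patient dropped out).
patients = [
    {"arm": "intervention", "completed": True,  "outcome": 1},
    {"arm": "intervention", "completed": True,  "outcome": 1},
    {"arm": "intervention", "completed": True,  "outcome": 0},
    {"arm": "intervention", "completed": False, "outcome": None},
    {"arm": "intervention", "completed": False, "outcome": None},
    {"arm": "control",      "completed": True,  "outcome": 1},
    {"arm": "control",      "completed": True,  "outcome": 1},
    {"arm": "control",      "completed": True,  "outcome": 0},
    {"arm": "control",      "completed": True,  "outcome": 0},
    {"arm": "control",      "completed": True,  "outcome": 0},
]

def success_rate(records):
    # Missing outcomes are conservatively imputed as failures (0).
    return sum(r["outcome"] or 0 for r in records) / len(records)

for arm in ("intervention", "control"):
    assigned = [r for r in patients if r["arm"] == arm]
    completers = [r for r in assigned if r["completed"]]
    # Intention-to-treat: analyze everyone as assigned.
    # Per-protocol: analyze completers only, which invites attrition bias.
    print(f"{arm}: ITT = {success_rate(assigned):.0%}, "
          f"per-protocol = {success_rate(completers):.0%}")
```

In this example the per-protocol analysis suggests a benefit (67% vs. 40%) that vanishes under intention-to-treat (40% vs. 40%), because the intervention arm's drop-outs are excluded rather than accounted for.
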
Reporting bias refers to systematic differences between reported and unreported findings, including, e.g.,
differential reporting of outcomes between comparison groups and incomplete reporting of study findings (such
as reporting statistically significant results only). Also, narrative and systematic reviews that do not report search
strategies or disclose potential conflicts of interest raise concerns about reporting bias as well as selection bias
(Roundtree 2009). Techniques for diminishing reporting bias include thorough reporting of outcomes consistent
with outcome measures specified in the study protocol, including attention to documentation and rationale for
any post hoc (after the completion of data collection) analyses not specified prior to the study, and reporting of
literature search protocols and results for review articles. Reporting bias, which concerns differential or
incomplete reporting of findings in individual studies, is not the same as publication bias, which concerns the
extent to which all relevant studies on a given topic proceed to publication.

5. Random Error

In contrast to the systematic effects of various types of bias, random error is a source of non-systematic
deviation of an observed treatment effect or other outcome from a true one. Random error results from chance
variation in the sample of data collected in a study (i.e., sampling error). The extent to which an observed
outcome is free from random error is precision. As such, precision is inversely related to random error.

Random error can be reduced, but it cannot be eliminated. P-values and confidence intervals account for the
extent of random error, but they do not account for systematic error (bias). The main approach to reducing
random error is to establish large enough sample sizes (i.e., numbers of patients in the intervention and control
groups of a study) to detect a true treatment effect (if one exists) at acceptable levels of statistical significance.
The smaller the true treatment effect, the more patients may be required to detect it. Therefore, investigators
who are planning an RCT or other study consider the estimated magnitude of the treatment effect that they are
trying to detect at an acceptable level of statistical significance, and then “power” (i.e., determine the necessary
sample size of) the study accordingly. Depending on the type of treatment effect or other outcome being
assessed, another approach to reducing random error is to reduce variation in an outcome for each patient by
increasing the number of observations made for each patient. Random error also may be reduced by improving
the precision of the measurement instrument used to take the observations (e.g., a more precise diagnostic test
or instrument for assessing patient mobility).
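
To illustrate "powering" a study, the sketch below applies the standard normal-approximation formula for the sample size needed per group to compare two proportions at a two-sided alpha of 0.05 and 80% power; the effect sizes are invented.

```python
# Standard normal-approximation formula for the sample size per group
# needed to compare two proportions:
#   n = (z_alpha + z_beta)**2 * (p1*(1 - p1) + p2*(1 - p2)) / (p1 - p2)**2
Z_ALPHA = 1.96   # two-sided alpha = 0.05
Z_BETA = 0.84    # power = 80% (beta = 0.20)

def n_per_group(p1, p2):
    return ((Z_ALPHA + Z_BETA) ** 2
            * (p1 * (1 - p1) + p2 * (1 - p2))
            / (p1 - p2) ** 2)

# The smaller the true treatment effect, the more patients are required:
print(round(n_per_group(0.50, 0.60)))   # 10-point effect -> ~384 per group
print(round(n_per_group(0.50, 0.55)))   # 5-point effect  -> ~1,560 per group
```
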
6. Role of Selected Other Factors

Some researchers contend that if individual studies are to be assembled into a body of evidence for a systematic
review, precision should be evaluated not at the level of individual studies, but when assessing the quality of the
body of evidence. This is intended to avoid double-counting limitations in precision from the same source
(Viswanathan 2014).

In addition to evaluating internal validity of studies, some instruments for assessing the quality of individual
studies evaluate external validity. However, by definition, the external validity of a study depends not only on
its inherent attributes, but also on the nature of the evidence question for which the study is more or less relevant. An
individual study may have high external validity for some evidence questions and low external validity for others.
Some quality assessment tools for individual studies account for funding source (or sponsor) of a study and
disclosed conflicts of interest (e.g., on the part of sponsors or investigators) as potential sources of bias. Rather
than being direct sources of bias themselves, a funding source or a person with a disclosed conflict of interest
may induce bias indirectly, e.g., in the form of certain types of reporting bias or detection bias. Also, whether the
funding source of research is a government agency, non-profit organization, or health technology
company does not necessarily determine whether it induces bias. Of course, all of these potential sources of
bias should be systematically documented (Viswanathan 2014).


C. Instruments for Assessing Quality of Individual Studies

A variety of assessment instruments are available to assess the quality of individual studies. Many of these are
for assessing internal validity or risk of bias for benefits and harms; others focus on assessing external validity.
These include instruments for assessing particular types of studies (e.g., RCTs, observational studies) and
certain types of interventions (e.g., screening, diagnosis, and treatment).

A systematic review identified more than 20 scales (and their modifications) for assessing the quality of RCTs (Olivo 2008). Although most of these had not been rigorously developed or tested for validity and reliability, the review found that one of the original scales, the Jadad Scale (Jadad 1996), had the strongest evidence of validity and reliability.

D. Strengths and Limitations of RCTs


For demonstrating the internal validity of a causal relationship between an intervention and one or more
outcomes of interest, the well-designed, blinded (where feasible), appropriately powered, well-conducted, and
properly reported RCT has dominant advantages over other study designs. Among these, the RCT minimizes
selection bias in that any enrolled patient has the same probability, due to randomization, of being assigned to
an intervention group or control group. This also minimizes the potential impact of any known or unknown
confounding factors (e.g., risk factors present at baseline), because randomization tends to distribute such
confounders evenly across the groups to be compared.

When the sample size of an RCT is calculated to achieve sufficient statistical power, it minimizes the probability
that the observed treatment effect will be subject to random error. Further, especially with larger groups, the
randomization enables patient subgroup comparisons between intervention and control groups. The primacy of
the RCT remains even in an era of genomic testing and expanding use of biomarkers to better target selection
of patients for adaptive clinical trials of new drugs and biologics, and advances in computer‐based modeling that
may replicate certain aspects of RCTs (Ioannidis 2013).
Despite its advantages for demonstrating the internal validity of causal relationships, the RCT is not the best study design for all evidence questions. Like all methods, RCTs have limitations, particularly regarding external validity; the relevance or impact of these limitations varies according to the purposes and circumstances of the study. To help inform health care decisions in real-world practice, evidence from RCTs and other experimental study designs should be augmented by evidence from other types of studies.
E. Different Study Designs for Different Questions
RCTs are not the best study design for answering all evidence questions of potential relevance to an HTA. As
noted in Box III-11, other study designs may be preferable for different questions. For example, the prognosis
for a given disease or condition may be based on a follow-up study of patient cohorts at uniform points in the
clinical course of a disease. Case-control studies, which are usually retrospective, are often used to identify risk
factors for diseases, disorders, and adverse events. The accuracy of a new diagnostic test (though not its
ultimate effect on health outcomes) may be determined by a cross-sectional study in which patients suspected of
having a disease or disorder receive both the new (“index”) test and the “gold standard” test. Non-randomized
trials or case series may be preferred for determining the effectiveness of interventions for otherwise fatal
conditions, i.e., where little or nothing is to be gained by comparison to placebos or known ineffective treatments.
Surveillance and registries are used to determine the incidence of rare or delayed adverse events that may be
associated with an intervention. For incrementally modified technologies posing no known additional risk,
registries may be appropriate for determining safety and effectiveness.
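
As a sketch of how test accuracy is computed when each patient receives both the index test and the gold standard, the following tallies invented paired results into sensitivity and specificity.

```python
# Each tuple pairs the index test result with the gold standard result
# for one patient (1 = positive, 0 = negative); values are invented.
paired_results = [(1, 1), (1, 1), (0, 1), (1, 0), (0, 0), (0, 0),
                  (1, 1), (0, 0), (0, 0), (1, 0)]

tp = sum(1 for idx, gold in paired_results if idx == 1 and gold == 1)
fn = sum(1 for idx, gold in paired_results if idx == 0 and gold == 1)
fp = sum(1 for idx, gold in paired_results if idx == 1 and gold == 0)
tn = sum(1 for idx, gold in paired_results if idx == 0 and gold == 0)

print(f"sensitivity = {tp / (tp + fn):.0%}")   # detected among diseased
print(f"specificity = {tn / (tn + fp):.0%}")   # ruled out among healthy
```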

Prepared by: Princess C. Cedeño, RPh
