
remote sensing

Article
Feasibility of Unmanned Aerial Vehicle Optical
Imagery for Early Detection and Severity Assessment
of Late Blight in Potato
Marston Héracles Domingues Franceschini 1, * , Harm Bartholomeus 1 ,
Dirk Frederik van Apeldoorn 2 , Juha Suomalainen 3 and Lammert Kooistra 1
1 Laboratory of Geo-Information Science and Remote Sensing, Wageningen University and Research,
P.O. Box 47, 6700 AA Wageningen, The Netherlands; [email protected] (H.B.);
[email protected] (L.K.)
2 Farming Systems Ecology Group, Wageningen University and Research, P.O. Box 430, 6700 AK Wageningen,
The Netherlands; [email protected]
3 Finnish Geospatial Research Institute, National Land Survey of Finland, Geodeetinrinne 1, 02430 Masala,
Finland; [email protected]
* Correspondence: [email protected]; Tel.: +31-317481604

Received: 12 December 2018; Accepted: 18 January 2019; Published: 22 January 2019 

Abstract: Assessment of disease incidence and severity at farm scale or in agronomic trials is
frequently performed based on visual crop inspection, which is a labor-intensive task prone to errors
associated with its subjectivity. Therefore, alternative methods to relate disease incidence and severity
with changes in crop traits are of great interest. Optical imagery in the visible and near-infrared
(Vis-NIR) can potentially be used to detect changes in crop traits caused by pathogen development.
Also, cameras on board Unmanned Aerial Vehicles (UAVs) have flexible data collection capabilities
allowing adjustments considering the trade-off between data throughput and its resolution. However,
studies focusing on the use of UAV imagery to describe changes in crop traits related to disease
infection are still lacking. More specifically, evaluation of late blight (Phytophthora infestans) incidence
in potato concerning early discrimination of different disease severity levels has not been extensively
reported. In this article, the description of spectral changes related to the development of potato
late blight under low disease severity levels is performed using sub-decimeter UAV optical imagery.
The main objective was to evaluate the sensitivity of the data acquired regarding early changes in
crop traits related to disease incidence. For that, UAV images were acquired on four dates during
the growing season (from 37 to 78 days after planting), before and after late blight was detected in
the field. The spectral variability observed in each date was summarized using Simplex Volume
Maximization (SiVM), and its relationship with experimental treatments (different crop systems)
and disease severity levels (evaluated by visual assessment) was determined based on pixel-wise
log-likelihood ratio (LLR) calculation. Using this analytical framework it was possible to identify
considerable spectral changes related to late blight incidence in different treatments and also to disease
severity levels as low as 2.5 to 5.0% of affected leaf area. Comparison of disease incidence
and spectral information acquired using UAV (with 4–5 cm of spatial resolution) and ground-based
imagery (with 0.1–0.2 cm of spatial resolution) indicates that UAV data allowed identification of
patterns comparable to those described by ground-based images, despite some differences concerning
the distribution of affected areas detected within the sampling units and an attenuation in the signal
measured. Finally, although aggregated information at sampling unit level provided discriminative
potential for higher levels of disease development, focusing on spectral information related to disease
occurrence increased the discriminative potential of the data acquired.

Keywords: hyperspectral sensing; very high resolution imagery; disease assessment; crop monitoring

Remote Sens. 2019, 11, 224; doi:10.3390/rs11030224 www.mdpi.com/journal/remotesensing



1. Introduction
Site-specific crop management and characterization of different cultivars in breeding trials
(i.e., phenotyping) are examples of tasks demanding the description of vegetation biochemical and
biophysical properties with high spatial and temporal resolution [1,2]. Recent advances in sensing
solutions and subsequent data analysis targeting these applications can provide alternatives to increase
feasibility of the detailed description of crop traits and plant response to stress [3–5]. In this context,
proximal or remote radiometric measurements of the vegetation canopy spectral response on discrete
wavelength intervals in the visible (Vis), near infrared (NIR), and shortwave infrared (SWIR) have
physically based relationships with leaf and canopy properties, and therefore have potential to be used
for spatially explicit estimation of crop traits.
More specifically concerning biotic stress monitoring, assessment of disease incidence and severity
frequently relies on visual rating, which is a time consuming activity susceptible to errors related to
several factors, such as the complexity of the disease symptoms presented by plants and the level of
experience of the professional performing the evaluation [6]. Alternative solutions for identifying
effects of pathogen development on crop traits based on radiometric measurements in the optical
domain have been introduced for spectra acquired at leaf and canopy levels [7–14]. Although methods
focusing on data acquired at leaf level demonstrate, in general, that a strong relationship between
spectral changes and disease development exists, the same is not always observed in studies based on
measurements acquired at canopy level. This fact is particularly true if discrimination between healthy
and diseased vegetation is intended to be made during early stages of the pathogen development
or under low disease severity levels [15]. As described by Behmann et al. [16], some aspects adding
complexity and decreasing accuracy of early assessment of disease effects on crop traits based on
spectral properties are: multiple factors simultaneously affecting the crop spectral response, besides
disease-related changes (e.g., effects of nutrient and water availability or natural plant senescence);
variability of canopy structure (e.g., leaf inclination), which together with changes in view-geometry
and illumination conditions may have considerable impact on canopy reflectance measurements, in
particular for data with very high spatial resolution; low signal-to-noise ratio for the spectra acquired;
and the fact that changes occurring due to early disease development are subtle (pre-visual), which
makes it difficult to obtain reference data (labels) at a more detailed scale than the plant level or without
being mixed with information corresponding to healthy tissue and background.
Despite these limitations, several authors have reported successful discrimination between healthy
and diseased crop patches and plants based on high resolution imagery acquired at canopy level by
sensors mounted on Unmanned Aerial Vehicles (UAVs) or other airborne platforms. Many of the
studies performed dedicate attention to the discrimination of diseased vegetation in perennial crops
(e.g., Huanglongbing in citrus, leafroll disease and Flavescence dorée in grapevine, verticilum wilt and
Xylella fastidiosa on olive trees, and red leaf blotch on almond orchards), using UAV-acquired multi-
or hyperspectral data in the Vis-NIR, frequently coupled with thermal imagery and measurements
of sun-induced fluorescence [13,14,17–22]. For annual crops, studies have also been conducted,
for example, on yellow rust and powdery mildew in wheat [23,24] and on downy mildew in opium
poppy plants [25] based on airborne or UAV multi- or hyperspectral imagery in the Vis-NIR-SWIR and
thermal infrared domains. In these studies, multiple features derived from the spectral information
acquired have been tested to assess the impacts of disease incidence on crop traits, such as reflectance
in single spectral bands, calculation of spectral distance metrics, derivation of vegetation indices,
and estimation of crop traits from spectral measurements based on radiative transfer model (RTM)
inversion. Usually, the features derived are subsequently used in parametric statistical analysis
(e.g., analysis of variance and groups means test) or in parametric or non-parametric modelling
frameworks for assessment of disease incidence or severity using classification or quantification
methods (e.g., linear and quadratic discriminant analysis, support vector machines, classification and
regression trees, etc.). With the variety of methods and features used, variable performance has also
been reported for the discrimination between healthy and diseased plants and for the quantification of
disease severity in the targeted areas. However, in most cases authors indicate that methods relying
on optical imagery acquired at canopy level are sensitive enough to allow timely detection of disease
incidence or accurate quantification of its severity.
Besides UAVs or other airborne platforms, ground-based imaging systems have been evaluated
in studies concerning disease assessment based on optical data [26–30]. In this case, the authors also
focused on pathogens affecting different perennial and annual crops (e.g., Huanglongbing in citrus,
cercospora leaf spot in sugarbeet, tulip breaking virus, yellow rust and fusarium head blight in wheat
and barley) and tested several methods for discriminating diseased and healthy plants, or to quantify
disease severity, achieving variable discriminative potential and accuracy. In addition to airborne or
ground-based imaging system, a considerable number of studies have employed point-based spectral
readings (mostly hemispherical-conical reflectance measurements [31]) at canopy level to evaluate
the development of different pathogens, and similarly to other sensing approaches, reported variable
degrees of accuracy or discriminative potential for the information acquired [32–36].
Consequently, considerable evidence exists that canopy based spectral data may provide useful
information for discriminating between diseased and healthy plants or for assessing disease severity
due to the direct impact of pathogen development on biochemical and biophysical properties of
vegetation at leaf and canopy scales [22]. However, disease symptoms, resulting from pathogen
development and from plant response to infection, are to certain extent specific to each crop and
disease considered [9,10]. Therefore, not all results obtained in a specific context can be generalized
to others. Also, many studies performing data acquisition at canopy level do not include a detailed
description of the relationship between disease development and changes in crop spectral response,
or do not discuss the implications of these changes in the results obtained during discrimination of
healthy and diseased areas, or during modeling of disease severity. This fact may be attributed to
the lack of detail in the available datasets, mainly regarding spatial and spectral resolution or timely
assessment of disease development. For example, some studies [14,18,20,24] adopt plants or canopy
patches with up to 5–10% disease severity to characterize low pathogen incidence, which may be
a relatively high value for other crops or pathogens, depending on the management practice to be
implemented and on how early the detection needs to be made.
Regarding late blight (Phytophthora infestans) incidence in potato (Solanum tuberosum), only a
few studies are available relating disease development and crop health monitoring based on UAV
imagery or other spectral datasets acquired at canopy level. Considering the importance of late blight
assessment for potato management, as emphasized by Cooke et al. [37], this topic is certainly of
interest. Sugiura et al. [38] presented an approach for assessing late blight severity using UAV optical
imagery. This method involves RGB image color transformation and pixel-wise classification based
on a threshold optimization procedure. Results obtained by these authors are relatively accurate,
with reported R2 between the area under the disease progress curve estimated visually and by the
image-based approach, varying between 0.73 and 0.77. Duarte-Carvajalino et al. [39] performed
machine learning-based estimation of late blight severity using very high resolution imagery acquired
over the growing season, with a modified camera registering blue, red, and NIR bands. Despite using
considerably different prediction approaches in comparison with that described by Sugiura et al. [38],
the performance reported was similar in both studies. Other authors only described qualitative
evaluation of late blight incidence using UAV multispectral imagery [40,41] or only assessed effects
of advanced stages of the pathogen development on potato traits and crop spectral response using
data acquired by a hyperspectral imaging system [42,43]. However, studies focusing on UAV or
other sources of very high resolution imagery (with sub-decimeter resolution), including validation by
means of ground truth data (e.g., measurement of crop traits and assessment of disease severity, etc.),
and aiming at a detailed description of spectral changes related to early pathogen incidence have not
been made so far for potato infection with late blight.
Therefore, the objectives of the present study can be summarized as follows: (i) identify changes
in the potato canopy reflectance in the optical domain related to late blight development, in different
organic cropping systems (i.e., cultivation with a single cultivar in contrast to a mixture of different
cultivars); (ii) compare alterations in the spectra observed using ground-based imagery (pixel size
between 0.1 and 0.2 cm) with those detected through UAV data (pixel size between 4 and 5 cm);
and (iii) evaluate the potential of UAV imagery for early discrimination of different late blight severity
levels in potato, in particular identifying possible detectable changes in the canopy reflectance due to
early disease development using sub-decimeter imagery. For that, ground-based and UAV images
were analyzed using an up-to-date analytical framework, involving Simplex Volume Maximization
(SiVM) and pixel-wise log-likelihood ratio (LLR) calculation, as similarly performed by other authors
at leaf and canopy levels [9,44], in order to provide a sound basis for conclusions regarding the
objectives of this research.
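As a conceptual illustration of the pixel-wise log-likelihood ratio idea (the exact formulation used in the study is detailed in the Methods; the function and inputs below are hypothetical), a pixel is scored by how much more likely its discrete feature — e.g., its closest SiVM archetype — is under a "treatment" distribution than under a reference distribution:

```python
import numpy as np

# Conceptual sketch only, not the study's formulation: score each pixel by the
# log ratio of its feature probability under a treatment distribution versus a
# reference distribution. Positive values flag features over-represented in
# the treatment (e.g., diseased) pixels.
def pixel_llr(pixel_features, p_treatment, p_reference, eps=1e-12):
    """pixel_features: integer feature index per pixel;
    p_treatment / p_reference: probability of each feature index."""
    p_t = np.asarray(p_treatment)[pixel_features]
    p_r = np.asarray(p_reference)[pixel_features]
    return np.log((p_t + eps) / (p_r + eps))

features = np.array([0, 1, 2, 2])
llr = pixel_llr(features, p_treatment=[0.2, 0.2, 0.6], p_reference=[0.5, 0.3, 0.2])
print(llr > 0)  # positive only where a feature is over-represented in the treatment
```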
2. Materials and Methods
2.1. Study Area and Experimental Set-Up

The data acquisition was carried out during the spring and summer of 2016 in an organic
strip-cropping experiment (51.9917° N, 5.66332° E; WGS84) started in 2014 at the Droevendaal
experimental farm of Wageningen University, The Netherlands. In this site, experimental plots
cultivated with potato were followed, mainly focusing on the assessment of late blight (Phytophthora
infestans) development and general crop health status. Twelve plots, measuring 3 by 10 m (small plots),
were established in a strip along the field (Figure 1), while buffer areas, measuring 3 by 5 m, were
placed before and after each plot, in the same strip, in order to avoid border effects between plots.
The same experimental configuration, but with larger plots (6 by 10 m, with buffer areas of 6 by 5 m),
was repeated in a neighboring field.

Figure 1. Distribution of experimental plots and treatments (T1, non-mixed system; T2, mixed system)
in the study site. Panels correspond to false-color composites (735, 631, and 609 nm as RGB bands) for
UAV imagery acquired 37 (a), 50 (b), 64 (c), and 78 (d) days after planting (DAP). White boundaries
indicate small and large experimental plots. Original experimental arrangement is indicated by black
connectors and new blocks used for treatments comparison (as described in Section 2.6) are indicated
in blue.

Two different treatments were compared within the experiment: (a) plots in which a single
cultivar, susceptible to late blight (“Raja”), was planted (non-mixed system, T1); and (b) plots in which
a mixture of three cultivars (“Raja”, “Connect”, and “Carolus”) with different degrees of resistance
(from low to high, respectively) to late blight was alternated within each crop row (mixed system, T2).
It is worth noting that in T2 the mixture was made systematically in each row, alternating the
different cultivars during the planting operation. Considering the
treatments applied to the experimental plots, the minimum comparable area between plots, besides
individual plants, corresponded to sampling units, including three consecutive plants arranged in the
same row. Based on that, each plot was divided into multiple rectangular patches measuring 0.75 by 1 m,
hereafter referred to as sampling units (SUs). These SUs were used during ground truth measurements
and also to extract spectral information from UAV imagery.
Crop traits (leaf chlorophyll content and canopy height) were measured (as described in
Section 2.6) in a selection of SUs within each plot, in parallel with acquisitions of UAV and ground-based
spectral imagery (Sections 2.2 and 2.3). Late blight occurrence and severity were visually assessed
every 3 to 5 days after the first symptoms of the disease were detected, following the methodology
described by the European and Mediterranean Plant Protection Organization [45]. For late blight
assessment, four fixed sample units in the small plots (one per crop row) and one sample unit in the
large plots were followed during the growing season. Also, six extra SUs were randomly chosen in
each small experimental plot for disease assessment after late blight was first observed in the field, in
order to better describe intra-plot variability. The final scores obtained at sampling unit level were
summarized in 13 disease severity classes, as follows (bounded in relation to the previous class):
healthy (no disease observed), ≤1.0%, ≤2.5%, ≤5.0%, ≤7.0%, ≤10.0%, ≤15.0%, ≤25.0%, ≤50.0%,
≤75.0%, ≤90.0%, ≤97.5%, and >97.5% of disease severity. It is worth noting that low disease severity,
i.e., below 10% of leaf area affected, was assessed counting lesions observed in the leaves, while higher
severity levels were estimated directly by visual evaluation of the percentage of leaf area affected,
as recommended by the methodology adopted [45]. Scores were given for each plant and final severity
class for a given sampling unit was calculated taking their average, after transforming the scores to
area of affected tissue by area of sampling unit surface.
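The aggregation just described can be sketched as follows — an illustrative implementation (not the authors' code), using the class bounds listed above, with per-plant scores already expressed as percent of affected leaf area:

```python
# Illustrative sketch (not the authors' code): aggregate per-plant late blight
# scores into the 13 severity classes described in the text. Bounds are upper
# limits in percent of affected leaf area, each class bounded by the previous.
CLASS_BOUNDS = [0.0, 1.0, 2.5, 5.0, 7.0, 10.0, 15.0, 25.0, 50.0, 75.0, 90.0, 97.5]

def severity_class(plant_scores):
    """Average per-plant scores (% affected leaf area) for one sampling unit
    and map the result to a severity-class label."""
    mean_severity = sum(plant_scores) / len(plant_scores)
    if mean_severity == 0.0:
        return "healthy"
    for bound in CLASS_BOUNDS[1:]:
        if mean_severity <= bound:
            return f"<={bound}%"
    return ">97.5%"

print(severity_class([0.0, 0.0, 0.0]))  # healthy
print(severity_class([2.0, 3.0, 4.0]))  # <=5.0%
```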
The experiment followed a generalized randomized block design, with three blocks and two
replicates for each treatment (i.e., cultivation systems) in each block (Figure 1). From the three blocks
included in the experimental site, only the first two were followed in this study (i.e., eight experimental
plots), due to legal restrictions concerning the data acquisition (i.e., UAV flights) in the area of study.
Analysis of the data acquired was adapted to these restrictions as described in Sections 2.6–2.8.

2.2. UAV Optical Imagery with Sub-Decimeter Resolution


Images were acquired on four dates during the growing season (Table 1) using a lightweight
hyperspectral frame camera (Rikola Ltd., Oulu, Finland) on board a UAV platform in order to follow
the dynamics of crop and disease development over time. The camera used is based on a Fabry-Perot
interferometer (FPI) [46] and was programmatically configured to register 16 narrow bands between
600 and 900 nm (Figure 2). These bands were chosen due to their importance to describe changes in
biochemical (leaf chlorophyll content) and biophysical (e.g., leaf area index, ground cover, etc.) traits
of vegetation at leaf and canopy levels [47].

Figure 2. Specifications of the data acquired with the hyperspectral imaging system mounted on the
UAV platform (red boxes, 16 spectral bands) and in handheld configuration (green boxes, 31 spectral
bands; Section 2.3). The center line in each box indicates the spectral band center, and the box
extremities indicate the full width at half maximum for each band (FWHM; varying between 13 and
21 nm for UAV data and between 13 and 23 nm for ground-based images).
Due to intrinsic characteristics of the FPI system used, images corresponding to different
wavelengths were acquired sequentially, since changes in the wavelengths measured depended
on internal camera adjustments. Consequently, a mismatch between images corresponding to different
bands in a given data-cube occurred, an issue solved during photogrammetric processing with a
dedicated software (PhotoScan version 1.3, AgiSoft LLC, St. Petersburg, Russia). This procedure
relied on the implementation of Structure from Motion (SfM) algorithm, with feature matching and
self-calibrating bundle adjustment [48]. During image alignment and derivation of dense point clouds,
imagery with full resolution was used (i.e., setting quality to “high” and “ultra-high” for these steps in
the software processing chain, respectively). Optimization of retrieved camera position and orientation
for each scene was performed based on 4 to 8 ground control points (depending on the acquisition
date, Table 1), with coordinates registered using a RTK-GPS. Before the optimization step, sparse
point clouds were filtered based on residuals and reconstruction uncertainty (10% of points with the
largest values were removed in each case), as performed by Honkavaara et al. [49]. Dense point cloud
depth filtering was set to “mild” to preserve details in the final 3D reconstruction of the crop surface.
Considering the approximate flight height of 80 m, a ground sampling distance between 0.04 and
0.05 m was achieved in the final orthorectified images.
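This resolution can be sanity-checked with the standard nadir-camera relation GSD ≈ H · p / f. The text gives only the flight height (~80 m) and resulting GSD; the pixel pitch and focal length below are assumed values for illustration, not the camera's actual specifications:

```python
# Back-of-the-envelope check of the reported ground sampling distance (GSD).
# The 5.5 um pixel pitch and 9 mm focal length are assumed for illustration
# only; the text provides the ~80 m flight height and the 0.04-0.05 m GSD.
def ground_sampling_distance(height_m, pixel_pitch_um, focal_length_mm):
    """GSD (m/pixel) for a nadir-looking frame camera: H * pitch / f."""
    return height_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

gsd = ground_sampling_distance(80.0, 5.5, 9.0)
print(round(gsd, 3))  # 0.049, consistent with the 0.04-0.05 m reported
```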

Table 1. General aspects related to the data acquisition using the Unmanned Aerial Vehicle (UAV)
platform and a ground-based sensing setup (Section 2.3). Days after planting (DAP), estimate general
crop growth stage according to the BBCH (‘Biologische Bundesanstalt, Bundessortenamt and CHemical
industry’) scale [50], illumination conditions, and number (nbr.) of ground control points (GCPs) used
during photogrammetric processing.

Date 1 | DAP | Growth Stage 2 | Ground Data 3 | Illum. 4 | Integration Time (ms) | Nbr. of GCPs
26/05  | 37  | 2–4            | I, II         | Sunny    | 10                    | 4
08/06  | 50  | 4–6            | I, II         | Sunny    | 10                    | 4
22/06  | 64  | 6–7            | I, II, III    | Cloudy   | 20                    | 8
06/07  | 78  | 7–8            | I, II, III    | Sunny    | 10                    | 7
1 Flights were performed between 10:00 h and 13:00 h (GMT+2) to minimize angular effects of incident radiance on the
measurements. 2 BBCH scale summary: 0 = germination, 1 = leaf development, 2 = formation of basal side shoots,
3 = main stem elongation, 4 = tuber formation, 5 = inflorescence emerging, 6 = flowering, 7 = fruit development,
8 = ripening of fruit and seed, 9 = senescence. 3 I = crop traits (leaf chlorophyll content, canopy height), II = canopy
spectra acquired with camera in handheld mode for a selection of SUs, III = late blight severity assessment. 4 Sunny
illumination conditions corresponds to clear sky while cloudy indicates partially overcast conditions.

Conversion of digital numbers (registered with 12-bit radiometric resolution) to radiance, in W/m2
sr nm, was performed using camera manufacturer’s proprietary software (HyperspectralImager, v2.0,
Rikola Ltd., Oulu, Finland). This step included the correction for dark current using images taken with
the sensor lens completely covered (dark reference), acquired before each flight, and for flat field using
factory calibration parameters. Radiance was converted into reflectance factor through the empirical
line method using images of a Spectralon reference panel with nominal 50% reflectance (LabSphere
Inc., North Sutton, NH, USA). These images were taken immediately before flight under the same
general illumination conditions observed during data acquisition.
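The conversion described above can be sketched as a one-point empirical line, assuming a zero intercept (the single 50% panel fixes only the slope); the arrays below are toy values for illustration:

```python
import numpy as np

# Sketch of the reflectance conversion described above, assuming the one-point
# empirical line with a zero intercept: target radiance is scaled by the
# radiance measured over the 50% Spectralon panel, band by band.
def to_reflectance_factor(radiance, panel_radiance, panel_reflectance=0.50):
    """radiance: (rows, cols, bands) array in W/(m^2 sr nm);
    panel_radiance: (bands,) mean radiance over the panel per band."""
    return radiance * (panel_reflectance / np.asarray(panel_radiance))

rad = np.full((2, 2, 3), 40.0)         # toy 2 x 2 image with 3 bands
panel = np.array([80.0, 100.0, 50.0])  # toy panel radiances per band
refl = to_reflectance_factor(rad, panel)  # per-band factors 0.25, 0.2, 0.4
```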
The orthomosaics used to extract the radiometric information corresponding to the monitored
areas were derived taking the average of all reflectance values measured in a given location (pixel).
This product aggregated spectral information from different images, with variable angular properties
due to the combination of UAV movement, overlaps between the images acquired and relatively large
camera field of view. The reason to derive such a product was to mitigate the impact of angular effects
on the radiometric data used to characterize the monitored patches along the field, in agreement with
results obtained by Aasen and Bolten [3]. As introduced in Section 2.1, spectral data was extracted
within the experimental plots for a given number of SUs in each date. Before late blight development
onset, four SUs in each small plot were considered during spectral data extraction, while after the
disease was first detected (i.e., after 64 days after planting—DAP), 10 SUs per small plot were used.
Large experimental plots had only one sampling unit per plot followed throughout the growing season
that were included in the analysis.
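The per-pixel averaging used to build these orthomosaics can be sketched as follows — a minimal illustration of the aggregation step only, not the actual photogrammetric processing chain:

```python
import numpy as np

# Minimal sketch of the averaging described above: each orthomosaic pixel is
# the mean of all overlapping-image observations at that location, mitigating
# angular effects. NaN marks locations an image does not cover.
def mean_mosaic(ortho_images):
    """ortho_images: list of (rows, cols) arrays on a common grid, NaN = no data."""
    stack = np.stack(ortho_images)
    return np.nanmean(stack, axis=0)  # per-pixel average over observations

a = np.array([[0.2, np.nan], [0.4, 0.6]])
b = np.array([[0.4, 0.3], [np.nan, 0.2]])
mosaic = mean_mosaic([a, b])  # yields 0.3, 0.3, 0.4, 0.4
```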

2.3. Ground-Based Optical Imagery with Sub-Centimeter Resolution


Besides UAV data, images were acquired at the ground level using the same FPI camera system
(Section 2.2). This allowed a better description of the disease development on plants and leaves
through images with increased spatial (pixel size between 0.1 and 0.2 cm) and spectral resolution from
a perspective comparable to the UAV. Data acquisition with the handheld configuration of the FPI
sensor was made after UAV flights were performed (Table 1). Images were taken in one sampling unit
within each small experimental plot in the field (Figure 1). Illumination conditions constrained data
collection on the third date (64 DAP) and only the first block of the experiment (i.e., first four plots)
was imaged, but in this case two images per plot were acquired. The complete dataset obtained with
the camera in this configuration comprises 32 data-cubes with 31 spectral bands and measurements
from 500 to 900 nm, as described in Figure 2.
The first step for processing the ground-based images was to transform raw digital numbers to
radiance, as described in Section 2.2. After that, correction for camera lens distortion was performed
using an external reference pattern, following the approach described by Zhang [51]. Camera intrinsic
parameters and lens radial and tangential distortion (represented by three and two coefficients,
respectively) were estimated in order to improve geometrical representation of the area covered by
the images.
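The three radial and two tangential coefficients correspond to the Brown-Conrady distortion model commonly used with Zhang's calibration; the sketch below shows the forward model that such a calibration inverts (the coefficient values are made up for illustration, not the calibrated ones):

```python
# Brown-Conrady model behind the correction described above: three radial
# (k1, k2, k3) and two tangential (p1, p2) coefficients map ideal normalized
# image coordinates to their distorted positions. Coefficients here are
# illustrative values only.
def distort(x, y, k1, k2, k3, p1, p2):
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# A point near the image corner shifts more than one near the center:
print(distort(0.01, 0.01, -0.2, 0.05, 0.0, 1e-4, 1e-4))
print(distort(0.5, 0.5, -0.2, 0.05, 0.0, 1e-4, 1e-4))
```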
Due to the characteristics of the FPI sensor system described in Section 2.2, the different
bands measured using the camera in handheld configuration were not recorded simultaneously.
Consequently, despite the fixed position of the camera during data acquisition (pointing to the center
of the sampling unit at approximately one meter from the top of the canopy and oriented towards
the orthogonal to the sun principal plane), small movements of the camera and crop canopy were
still noticeable between bands of the same data-cube. In this case, band-to-band co-registration
was performed to correct possible positioning mismatches between data corresponding to different
wavelengths. For that, an area-based registration approach was implemented within the framework
proposed by Lowekamp et al. [52]. In preparation for the band’s co-registration, the spectral dataset
was divided into three subsets (503–660, 672–750, and 763–893 nm) and one reference band was selected
for each one of them (620, 724, and 803 nm, respectively). After that, all bands were registered to
the reference band in their respective subset. The reference bands were chosen using as criteria their
intermediary position between the main expected spectral changes in the vegetation spectral response
occurring in the wavelengths included in each subset. Although more objective approaches have been
proposed to divide data-cubes in representative spectral regions and to assign reference bands for
band-to-band registration [53] in similar contexts, the simplified approach implemented here allowed
selection of a relevant reference band sharing spectral similarities with all bands included in a given
subset. The final alignment between bands was obtained registering the first and last subsets to
the central set. This was achieved based on the transformation calculated for the nearest spectral
band on a subset to the bands in the central set. In all cases (i.e., band-to-band alignment within and
between subsets), two types of transformations were calculated, a rigid affine transform and a non-rigid
displacement field transform. Mutual information was used as a metric to optimize the transformations
applied [54]. Fine tuning of the registration parameters (e.g., configuration of multi-level registration
framework and gradient descent line search algorithm) was based on assessment made using a small
sample taken from the dataset before application on all images. Evaluation of the final registration
accuracy was performed using point based automatic feature extraction (SIFT; [55]) and matching
(FLANN; [56]), due to unavailability of control points in the imaged areas that could be used for this
purpose. Registration accuracy of the transformations was evaluated by matching features in each
spectral band with features detected in the reference band of their respective spectral subset. Root
mean squared error (RMSE) was calculated considering all the matched points for a given band, while
the final RMSE reported corresponded to the average RMSE for a given spectral subset and transformation
used. A final assessment was made comparing features in the closest band in each set to those in
the reference band of the central set, in order to estimate the final alignment quality between subsets
(Table A1).

2.4. Estimation of Ground Cover for Background Removal from Ground-Based Imagery
Ground cover was estimated for images acquired with the FPI-based camera on handheld mode
using a straightforward approach, similar to that implemented by Behmann et al. [16]. First, the spectra
of all images acquired on each acquisition date were subjected to dimensionality reduction through linear
Principal Components Analysis (PCA) in order to mitigate information redundancy and potentially
enhance distinction between vegetation and soil present in the imaged areas. Features derived using
PCA were then used as inputs to unsupervised segmentation through Gaussian Mixtures Modelling
(GMM), which was separately applied to each data-cube. A suitable number of clusters to retain in
each case was selected based on the gap statistic [57], considering up to 20 classes and selecting the
segmentation providing the maximal value for this parameter. Binary classification of each cluster as
background or vegetation was made using a logistic classification model trained, validated, and tested
using pixel-wise manually labelled data. For that, nine regions of interest (ROI) were randomly
selected to be manually labelled within each one of the 32 ground images. These ROIs had dimensions
corresponding to the ground sampling distance estimated for UAV images acquired on the same date.
This allowed derivation of a dataset taking into account resolution obtained in both cases, anticipating
a posterior use of the outputs obtained during ground-based background removal for background
effect mitigation on UAV images (Section 2.5). Ground-based imagery resolution was estimated to be
between approximately 1.17 and 1.59 mm, roughly 30 to 40 times finer than that of the UAV images.
Consequently, each ROI comprised a square area approximately 30 to 40 pixels on each side, depending
on the acquisition date and sampling unit considered. Half of the
manually labelled data was used for model training and validation, while the other half was used as
test dataset in order to verify if the models obtained had a stable performance. In this case, accuracy
was evaluated through the area under the precision versus recall curve, with values above 0.9 obtained
during test for all dates. As input for the logistic classification models, a selection of vegetation indices
(VIs) was tested (Table A2). In order to select only VIs contributing effectively to the distinction
between bare soil and vegetation, the logistic model was coupled with elastic net regularization [58].
Final prediction of each cluster class was made based on the pixel-wise probability estimated using
the logistic model obtained for each date. Clusters with more than 50% probability for the vegetation
class were retained as foreground. In this case, instead of using the average or median probability
estimate for pixels in a given cluster, different percentiles were calculated (from 0 to 100% in intervals
of 0.5%) and the cluster class was assigned as vegetation if the probability was greater than 50% for the
considered percentile. The most suitable percentile for each date, as well as the accuracy of the overall
segmentation process (Table A3), was obtained by calculating the RMSE of ground cover prediction in
comparison with manually labelled data in the calibration and test datasets (i.e., comparing number of
pixels labelled as vegetation at ROI level with classification outputs after cluster-wise prediction).
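The PCA-plus-GMM segmentation step can be sketched with scikit-learn. This is a simplified illustration: BIC is used below as a stand-in for the gap statistic of [57], and the function name and parameter values are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def segment_datacube(cube, n_components=5, max_clusters=20):
    """Unsupervised segmentation of a hyperspectral data-cube:
    PCA features followed by Gaussian Mixture clustering.
    cube: (rows, cols, bands) reflectance array."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)
    scores = PCA(n_components=n_components).fit_transform(X)
    # choose the number of clusters by BIC (the paper uses the gap statistic)
    best_gmm, best_bic = None, np.inf
    for k in range(2, max_clusters + 1):
        gmm = GaussianMixture(n_components=k, random_state=0).fit(scores)
        bic = gmm.bic(scores)
        if bic < best_bic:
            best_gmm, best_bic = gmm, bic
    return best_gmm.predict(scores).reshape(rows, cols)
```

Each cluster would then be classified as vegetation or background by the logistic model described above.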

2.5. Mitigation of Background Effects on UAV Imagery Using Vegetation Index Threshold
Spectra corresponding to pixels with a predominant bare soil fraction, or most affected by spectral
mixing, in the UAV images were removed based on outputs of the segmentation analysis performed
for the ground level data (Section 2.4). For that, ground-based images were registered to UAV image
patches, manually selecting one corresponding control point between both datasets for each sampling
unit, as well as manually locating the central crop row in each image and determining its direction.
This information allowed rotation of the ground and UAV images to a common axis (i.e., crop
row), and translation of the ground images to their approximate location in the field based on the
correspondence with the UAV images. After the image-to-image registration, a selection of narrow and
broad band VIs, including bands between 600 and 900 nm (Table A2), were evaluated for background
removal in the UAV images. A threshold for each VI was defined by finding the value providing the
most similar segmentation results (i.e., estimate of vegetation cover) to that derived using ground
images with full spatial and spectral resolution.
Adjustment of VI thresholds was performed using half of the SUs imaged at ground level in
each date (n = 4), while the generalization potential for the values obtained was assessed based on
validation performed with independent data (i.e., applying the thresholds obtained to the other half of
the dataset). The final VI used for background removal in UAV images was chosen based on the lowest
average RMSE for the validation dataset, considering all dates (Figure A1). Therefore, it was expected
that the selected VI and its corresponding thresholds had considerable generalization potential and
stability concerning changes in illumination, view-geometry, and background characteristics, despite
the small sample set size used for threshold optimization in each date. This approach was based on the
method adopted by Jay et al. [59] for background removal applied to ground-based imagery. The final
optimal thresholds by VI and acquisition date can be found in Table A4. OSAVI was finally selected
for background removal due to its frequent use in other studies, instead of NDVI*SR, which provided
slightly lower RMSE values on the overall evaluation (Figure A1).
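The threshold adjustment described in this section reduces to a one-dimensional search: pick the VI cutoff whose implied vegetation-cover fractions best match the reference covers derived from the ground imagery. A minimal sketch follows; the function names and the candidate grid are assumptions, while the OSAVI expression is the standard formulation with the 0.16 soil-adjustment constant.

```python
import numpy as np

def osavi(nir, red):
    # standard OSAVI: (NIR - Red) / (NIR + Red + 0.16)
    return (nir - red) / (nir + red + 0.16)

def fit_vi_threshold(vi_images, reference_covers, candidates=np.linspace(0.0, 0.9, 91)):
    """Select the VI threshold whose vegetation-cover estimates best match
    the reference ground covers (lowest RMSE over the calibration images)."""
    best_t, best_rmse = None, np.inf
    for t in candidates:
        covers = np.array([(vi > t).mean() for vi in vi_images])
        rmse = np.sqrt(np.mean((covers - np.asarray(reference_covers)) ** 2))
        if rmse < best_rmse:
            best_t, best_rmse = t, rmse
    return best_t, best_rmse
```

Validation would then apply the fitted threshold to the held-out half of the sampling units, as described above.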

2.6. Measurements of Crop Traits and Treatments Comparison Based on Linear Mixed Effects Models
Crop traits were measured at each acquisition date to assess direct impacts of disease severity
on plant biophysical and biochemical properties. Besides, these measurements allowed evaluation of
general differences between treatments that could be potentially related not only to disease incidence
but also to cultivar intrinsic characteristics. In addition to the estimation of ground cover (Section 2.5),
leaf chlorophyll content and canopy height were measured. Leaf chlorophyll content was derived based
on SPAD (Soil Plant Analysis Development; [60]) meter readings (SPAD-502; Minolta Corporation Ltd.,
Osaka, Japan). In this case, one measurement was made per plant in each sampling unit selected for
data acquisition. This measurement was made on one leaflet of the most developed leaf of the plant,
totaling three readings in each sampling unit.
Conversion from SPAD units to chlorophyll content per leaf surface area (µg·cm−2 ) was performed
based on the equation provided by Uddling et al. [61], and the average for three measurements made
within each sampling unit was the final leaf chlorophyll content evaluated. Similarly, canopy height
was measured from the potato ridge to the highest leaf in each plant and the average represented
the final values used to describe the canopy in each sampling unit. It is worth noting that since leaf
chlorophyll content and canopy height were measured in three SUs within each small experimental
plot (Figure 1), ground cover was estimated considering the same SUs for all data acquisitions (i.e., from
37 to 78 DAP). Only SUs in the small plots were considered for treatment comparison. Note that
a more complete dataset (as indicated in Section 2.2 and comprising observations
made in the small and large plots) was used to evaluate spectral changes over the growing season
(as described in Sections 2.7 and 2.8).
Besides the crop traits listed above, values of Weighted Difference Vegetation Index (WDVI)
were used as proxy to leaf area index (LAI), considering its strong relationship with this specific
crop trait [62]. This approach was necessary to allow a better interpretation of spectral changes
potentially related to the crop canopy structure. Therefore, in this manuscript, references to levels
of LAI are made purely on the basis of the spectral data, assuming that other canopy traits, such as leaf
angle distribution, had lower impact on the results observed in comparison with changes related to
LAI. This way, considerations related to canopy structure refer mainly to changes in LAI, except if
indicated otherwise.
The crop traits, together with vegetation indices providing the greatest discriminative potential
between treatments in the last data acquisition, were used for assessing whether differences between
treatments were significant in each acquisition date. For ranking vegetation indices according to their
discriminative potential, logistic regression was used to classify all pixels within the measured SUs
as corresponding to a given treatment (i.e., non-mixed or mixed systems). C-statistic [63,64] values
were then calculated from the classification outputs for each VI and used for evaluation (larger values
corresponding to greater discriminative potential).
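Since the C-statistic of a binary classifier equals the area under its ROC curve, the VI-ranking step can be sketched with scikit-learn as a univariate logistic model per index. The function name is a hypothetical illustration of the procedure, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def c_statistic(vi_values, treatment_labels):
    """Rank one vegetation index by its power to separate two treatments:
    fit a univariate logistic model and return its C-statistic (ROC AUC)."""
    X = np.asarray(vi_values, dtype=float).reshape(-1, 1)
    y = np.asarray(treatment_labels)
    prob = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
    return roc_auc_score(y, prob)
```

Indices with values closer to 1.0 would be ranked as more discriminative between the non-mixed and mixed systems.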
Statistical significance of differences observed between treatments was evaluated based on
linear mixed effects models (LMMs; [65]) fitted separately to each response variable, comprising traits
and VIs. As covariates used for modelling the variables of interest, treatment and acquisition date
together with their interaction were included as fixed effects, while experimental block and sampling
unit (SUs) ID (nested within block) were adopted as random effects in each model. Three blocks were
considered in the analysis after rearranging the initial experimental set-up by excluding two plots
from the last of the original blocks in the field (i.e., last experimental plot for each treatment, shown in
Figure 1). This rearrangement was made in order to better represent the potential variability existing
in the field (by dividing the area to be analyzed into a larger number of blocks), since the complete
experiment could not be sampled. Temporal trends within the groups considered (SUs ID nested
within blocks) were represented through an autocorrelation structure of order 1. Treatment comparison
for each acquisition date was performed by deriving marginal means for the models using contrasts
calculated by the package emmeans [66] in R, while the LMMs were fitted using the nlme [67] package
in the same environment. Selection of fixed and random effects for consideration (LMMs structure)
and evaluation of model assumptions was made based on Akaike information criterion (AIC) and on
residuals derived for each model.
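The LMM structure can be approximated in Python with statsmodels, although the analysis described above used nlme and emmeans in R. The sketch below keeps the treatment-by-date fixed effects but simplifies the random part to a block-level intercept and omits the AR(1) residual structure and the nested SU effect; all data are synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for block in range(3):                      # 3 blocks (random intercept)
    b = rng.normal(0.0, 0.5)
    for trt in (0, 1):                      # treatment (fixed effect)
        for date in range(4):               # acquisition date (fixed effect)
            for su in range(3):             # sampling units within the plot
                rows.append({"block": block, "trt": trt, "date": date,
                             "y": 10 + 2.0 * trt + 0.5 * date + b
                                  + rng.normal(0.0, 0.3)})
df = pd.DataFrame(rows)

# treatment, date, and their interaction as fixed effects; block as random intercept
fit = smf.mixedlm("y ~ C(trt) * C(date)", df, groups="block").fit()
```

With the simulated treatment effect of 2.0, the fitted fixed-effect coefficient for the treatment contrast should be close to that value.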

2.7. Description of Crop Canopy Spectral Variability through Simplex Volume Maximization (SiVM)
The effects of late blight incidence on the potato canopy reflectance, related to pathogen
development and plant defense response, were assessed using the matrix factorization approach
described by Thurau et al. [68,69]. This method has been successfully applied in studies aiming to
relate changes observed on the canopy reflectance with effects of water stress and disease infection in
plants grown under laboratory or field conditions [8,11,44]. The first step of this approach consists of
deriving a series of archetypal spectral signatures from the complete dataset comprising spectra of
healthy and stressed plants in order to describe the variability observed. These archetypes are derived
through convex constrained non-negative matrix factorization, and consequently, correspond to real
measurements within the dataset (e.g., pixels from the images acquired). The general optimization
problem targeted can be described as the minimization of the Frobenius norm between the original
dataset and the reconstructed data through the matrix W of base vectors and the coefficients matrix
H [8]. This is obtained during analysis by retaining only bases that contribute the most to maximize
the volume of the simplex described by the vectors included in W. As a computationally efficient alternative
to this procedure, Thurau et al. [68,69] proposed the so-called Simplex Volume Maximization (SiVM)
method, which relies on distance geometry rather than on the simplex volume itself [8,70]. The matrix
H of coefficients is obtained through constrained quadratic optimization and describes the optimal
abundance of each component W for reconstruction of the spectral signatures being modelled.
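A distance-geometry flavor of this idea can be sketched as a greedy selection of extreme spectra followed by non-negative least squares for the coefficients. Both steps are simplifications, assumed for illustration: the max-min selection stands in for the volume criterion of Thurau et al. [68,69], and NNLS replaces the constrained quadratic program used for H.

```python
import numpy as np
from scipy.optimize import nnls

def select_archetypes(X, k):
    """Greedy archetype selection in the spirit of SiVM: start from the
    spectrum farthest from the data mean, then repeatedly add the spectrum
    maximizing its minimum distance to those already selected
    (a max-min simplification of the simplex-volume criterion)."""
    d0 = np.linalg.norm(X - X.mean(axis=0), axis=1)
    idx = [int(d0.argmax())]
    for _ in range(k - 1):
        d = np.min(np.linalg.norm(X[:, None, :] - X[idx][None, :, :], axis=2), axis=1)
        idx.append(int(d.argmax()))
    return np.array(idx)

def abundances(X, W):
    """Non-negative coefficients H reconstructing each spectrum (row of X)
    from the archetypes W (rows), via non-negative least squares."""
    return np.array([nnls(W.T, x)[0] for x in X])
```

On data that are convex combinations of a few extreme spectra, the greedy selection recovers those extremes and the reconstruction H·W closely matches the input.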
In this study, the number of archetypes retained in W to represent the ground-based and
UAV–borne datasets to be analyzed was set empirically to 25, as performed by Wahabzada et al. [8]
and Thomas et al. [11]. It was verified that this number of bases provided good reconstruction
accuracy while avoiding the oversampling of the feature space, which may be beneficial considering
reconstruction accuracy [71], but might bring problems during further analysis due to the so-called
“curse of dimensionality” [72]. Each dataset (i.e., ground-based and UAV images) for a given acquisition
date was analyzed separately to mitigate impacts of changes in view geometry and illumination
conditions over time on analysis outputs. As an example, archetypes extracted for the ground-based
and UAV datasets acquired 78 DAP are presented in Figure A2. Two pixels were chosen in a UAV
image patch acquired on that date. Weights derived for reconstructing these pixel spectra from the
archetypes are illustrated for both datasets (UAV- and ground-based images).
The only difference between the steps involved in the application of SiVM to ground and UAV
data was related to the inputs used for the selection of archetypes. While for UAV images all spectra
extracted from the monitored areas were considered, for the ground-based data a pre-selection
of spectra was performed based on the outputs of the segmentation (GMM) performed during
background removal (Section 2.4). For that, the spectral angle [73] between each spectrum in a given
cluster and its “average spectrum” (i.e., band-wise reflectance average) was calculated, and spectral
signatures corresponding to the 1st, 25th, 50th, 75th, and 99th percentiles of spectral angle distance
to the average were selected. The cluster-wise pre-selected spectra for all images were then used
as input for the SiVM-based extraction of 25 archetypes, which finally represented the variability
of ground-based spectral data acquired on a specific date. Furthermore, Standard Normal Variate
(SNV) [74] transform was applied to the ground-based spectral dataset before SiVM implementation,
in order to minimize effects of illumination changes within the canopy on the analysis [75].
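The SNV transform itself is simply a per-spectrum standardization:

```python
import numpy as np

def snv(spectra):
    """Standard Normal Variate: center and scale each spectrum (row)
    by its own mean and standard deviation."""
    spectra = np.asarray(spectra, dtype=float)
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd
```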
After the chosen number of bases was derived (W), the abundance coefficients (H) obtained
for pixels from different treatments (i.e., non-mixed and mixed cultivation systems) or disease
severity classes were used to estimate a probability density distribution for each group. In this
study, the multivariate Dirichlet distribution was adopted and its parameters were estimated by
maximum likelihood. With the probability distribution estimated, pixels in the image were mapped
according to specific treatment or severity class based on the Bayes factor (i.e., log-likelihood ratio, in
this case, LLR, since no prior information on the data distribution was taken into account), as described
by Wahabzada et al. [8]. This mapping was performed comparing the difference between the logarithm
of probabilities, for coefficients h corresponding to each spectral signature (i.e., pixel-wise comparison),
within distributions from different treatments or severity classes considered.
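With Dirichlet distributions fitted per group, the pixel-wise mapping reduces to a log-likelihood ratio. A sketch with SciPy follows; the renormalization of H onto the simplex (required by `dirichlet.logpdf`) and the concentration parameters in the test are illustrative assumptions.

```python
import numpy as np
from scipy.stats import dirichlet

def llr_map(H, alpha_t1, alpha_t2, eps=1e-9):
    """Pixel-wise log-likelihood ratio between two Dirichlet distributions
    fitted to the abundance coefficients of each treatment or severity class.
    Rows of H are shifted by eps and renormalized so each lies on the simplex."""
    P = (H + eps) / (H + eps).sum(axis=1, keepdims=True)
    return np.array([dirichlet.logpdf(p, alpha_t1) - dirichlet.logpdf(p, alpha_t2)
                     for p in P])
```

Positive values indicate spectra more likely under the distribution of the group of interest (H1) than under the reference (H0).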
The description provided in this section is a simplified overview of the methodology implemented.
For a more comprehensive explanation, including mathematical formulation and notation used,
we refer to Thurau et al. [68,69], Wahabzada et al. [8], and Kersting et al. [70]. For most of the steps
described in this section, open source libraries in Python were used, notably PyMF, to implement
SiVM, and Dirichlet MLE, to estimate parameters for Dirichlet distributions.

2.8. Pixel-wise Comparison Framework to Identify Relevant Spectral Information Associated with Late
Blight Development
For assessing the impact of disease development on ground-based and UAV data, two different
approaches were adopted. First, considering the reduced number of images acquired at ground level,
only comparison between treatments (“non-mixed” and “mixed” cultivation systems, T1 and T2) was
possible. In this case, all ground-based images acquired on a given date (n = 8) had their pixel-wise
probability estimated considering distributions of coefficient H for T1 and T2. These probabilities were
then compared using the LLR, as described in Section 2.7. Probability distribution corresponding to
T1 was considered as hypothesis H1 (“group of interest”) and compared to the null hypothesis H0
(“reference”) represented by probability estimated for T2, since healthier plants were expected to be
observed in this treatment. A parallel was made between results obtained for ground data and those
observed for UAV-based imagery. However, the latter case comprised considerably more observations
(as described in Sections 2.2 and 2.3).
Besides the comparison at treatment level, more specific evaluation was made for each disease
severity class observed using data acquired by the UAV platform in the last two acquisitions (64 and
78 DAP). In this case, the healthiest observations were adopted as null hypotheses (H0), during LLR
calculation, for comparison with each severity class (classes with at least 10 observations, Table A5).
Also in this case, only observations from a single treatment were used to represent a given severity
class. Consequently, comparison between hypotheses H0 and H1 could be made within the same
treatment for observations with relatively low disease severity levels (below 5.0% severity). For that,
from observations made 64 DAP, only those from T1 were evaluated in this phase. In addition, for data
acquired 78 DAP, two SUs from each treatment were eliminated from the analysis (as indicated in
Table A5).
To summarize the differences detected during the comparisons, LLR values for SUs from the classes
of interest (H1) were grouped in discrete intervals (e.g., from 0 to 15, in steps of 0.5). This allowed
representation of the gradual spectral changes associated with increasingly strong association between
spectral information and the specific group considered. In the case of treatment comparison, only SUs
cultivated with T1 were taken into account to summarize the differences between spectral information
relatively weakly associated with T1 (i.e., LLR values below 0.5) and that with stronger relation with
this treatment (i.e., higher LLR values). Similarly, when focusing on the evaluation of different disease
severity classes only image patches corresponding to SUs scored within the specific class of interest
were evaluated.
To further facilitate the interpretation of the outputs, the ratio between the average reflectance
for each LLR interval and that corresponding to the first interval (i.e., lowest LLR values observed)
was derived. This calculation was expressed as a percentage and referred to as “ratio”, as performed by
Naidu et al. [76]. Besides this metric, the percentage of observations (number of pixels divided by the
total) within a given LLR interval for the group of interest was subtracted from the percentage of
observations in the same LLR interval for the reference group, yielding a parameter referred to as
delta (∆). In this case, the metric derived allowed visualization of proportional changes
in the frequency of LLR values observed within the discrete intervals, considering the treatment or
disease severity classes of interest in relation to the reference. Also, the cumulative absolute delta
(delta total—∆t; i.e., sum of delta values) was calculated to give a general overview concerning the
changes in LLR value distribution between treatments or between a given disease severity class and
the healthier reference adopted.
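The three summary metrics of this section (ratio, delta, and cumulative absolute delta) can be sketched as follows; the bin edges, array shapes, and function name are illustrative assumptions.

```python
import numpy as np

def ratio_and_delta(llr_interest, spectra_interest, llr_reference,
                    bins=np.arange(0, 15.5, 0.5)):
    """Summarize LLR outputs: 'ratio' is the mean spectrum in each LLR bin as a
    percentage of the mean spectrum in the first bin; 'delta' is the difference
    in the fraction of observations per bin between the group of interest and
    the reference; 'delta_total' is the cumulative absolute delta (∆t).
    Assumes the first LLR interval is populated."""
    which = np.digitize(llr_interest, bins) - 1
    base = spectra_interest[which == 0].mean(axis=0)
    ratio = np.array([100.0 * spectra_interest[which == i].mean(axis=0) / base
                      if np.any(which == i) else np.full(base.shape, np.nan)
                      for i in range(len(bins) - 1)])
    f_int, _ = np.histogram(llr_interest, bins=bins)
    f_ref, _ = np.histogram(llr_reference, bins=bins)
    delta = f_int / f_int.sum() - f_ref / f_ref.sum()
    delta_total = np.abs(delta).sum()
    return ratio, delta, delta_total
```

By construction, the ratio of the first bin is 100% for every band and the per-bin deltas sum to zero across the LLR range.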

3. Results

3.1. Evaluation of Crop Traits and Disease Severity over the Growing Season
The crop development during the period evaluated was characterized by continuous decrease
in leaf chlorophyll content coupled with a general increase in canopy height and ground cover
(Figure 3a–c). Differences between treatments occurred mainly later than 64 days after planting (DAP),
with higher leaf chlorophyll content, canopy height, and ground cover observed on the mixed cropping
system (T2). Despite the higher values of leaf chlorophyll and of traits associated with canopy structure
on T2 later in the growing season, lower values of these traits were observed for plants in this treatment
during early crop development (i.e., first acquisition, 37 DAP), indicating a smaller initial growth rate
for T2.

Figure 3. Leaf chlorophyll content (a), canopy height (b), ground cover (c), and vegetation indices
(Table A2), namely, CIre (Chlorophyll Index red edge, d), REP (Red Edge Position, e), and WDVI
(Weighted Difference Vegetation Index, f) derived from UAV imagery, describing crop growth under
different cultivation systems (T1 and T2, “non-mixed” and “mixed” treatments, respectively). Points
indicate within plot measurements (n = 3 sampling units per plot), while each cross represents the
average at plot level. Lines connect the average for each treatment over time. Numbers in blue
correspond to the p-value for each acquisition date. Asterisks communicate the same p-values,
indicating contrasts significant at 0.05 (*), 0.01 (**), and 0.001 (***).

Contrasts between treatments were not significant for chlorophyll content, while being more
frequently significant for traits related to canopy structure (Figure 3a–c). In all cases, differences
observed were generally larger and more significant in the last two evaluation dates (i.e., 64 and
78 DAP). Differences observed for vegetation indices with relatively good discriminative potential
for the different treatments (according to results presented in Table A6) followed similar patterns to
those described by the crop traits (Figure 3d–f). It is worth noting that CIre (Chlorophyll Index red
edge, Table A2) was selected due to its good discriminative potential for data corresponding to the last
two acquisitions, while having comparatively lower discriminative potential for the first acquisitions,
a desirable feature in the application evaluated in this research since late blight was not observed in
the field during the initial data collections.
The relationship between crop traits and spectral information (vegetation indices) is further
explored in Figure A3. The indices selected (CIre and REP) were both affected by changes in leaf
chemistry and canopy structure and were not able to indicate alterations strictly associated with a single
crop property. Besides CIre and REP, results for WDVI were also evaluated in Figure 3f due to its
close relationship with leaf area per unit of ground surface (i.e., LAI), an important trait related to
canopy structure. While comparable trends are observed in the last two acquisitions between WDVI
and the measured canopy properties (i.e., crop height and ground cover), in the initial evaluations
(i.e., 37 and 50 DAP) the trend observed for WDVI is in part the opposite of that described by these crop
traits (Figure 3b,c). This indicates that ground cover and canopy height might not have been sufficient
to describe all differences between treatments regarding canopy architecture (i.e., other canopy traits,
such as LAI, varied between treatments), in particular for these first two data acquisitions.
Changes on ground cover over time for sampling units (SUs) imaged using ground-based and
UAV sensors during the growing season are presented in Figure A4. Values of vegetation index
(OSAVI) in both cases agree with results presented in Figures 3d–f and A3d, with a general reduction
in chlorophyll content and in canopy structure-related traits on the last acquisition, in particular for
the “non-mixed” system (T1). In addition, the visual correspondence between the datasets acquired by
both sensing methods (Figure A4) indicates that despite potential image-to-image residual registration
errors and view-geometry dissimilarities, comparison between images is valid in the context of this
research. Figure A5a provides a quantitative comparison of OSAVI median values in the SUs evaluated
corresponding to both sensing approaches (i.e., ground-based and UAV). Disagreement between the
values observed is found mainly in the first acquisition (38 DAP), which may be related to residual
background effects on the data corresponding to the UAV imagery due to its coarser resolution.
Assessments of disease incidence and severity indicated considerable differences between
treatments over time (Figure 4 and Table A5). Since the first late blight symptoms were identified in
the field (64 DAP), SUs cultivated with T1 presented higher levels of disease incidence or severity.
Assessments made on the same dates in which UAV and ground-based data were acquired indicate that
these datasets provide a good description of the early stages of disease development (up to 15% disease
severity). The last assessment made together with a UAV flight (i.e., performed 78 DAP) corresponds
to the date with the largest contrasts between treatments concerning leaf chlorophyll content and
traits related to canopy structure, amongst all dates having UAV and ground-based imagery available
(Figures 3 and 4). Therefore, crop traits followed the observed levels of disease severity, and changes on
vegetation biochemical and biophysical properties observed 64 and 78 DAP were potentially related
to disease development. The association between disease severity and crop traits was also verified
through the rank correlation coefficients calculated between disease severity classes and vegetation
indices derived from UAV imagery acquired 64 and 78 DAP (Table A7). For instance, a negative
correlation of approximately −0.55 was observed between disease severity and REP, a vegetation
index related to leaf and canopy properties (Figure A3), indicating a considerable relationship between
disease severity and canopy properties on this specific assessment date. Correlations between disease
severity and vegetation indices corresponding to 64 DAP are substantially weaker and not significant
in comparison with those obtained for 78 DAP (Table A7), which can be attributed to the lower levels
of disease incidence and severity observed on this date.

Figure 4. Distribution of visual disease scores into specific classes of late blight severity (according to
approximate percentage of affected leaf area at sampling unit level) for each assessment date. T1 and T2
correspond to systems cultivated with a single cultivar (“non-mixed”) and with a mixture of different
cultivars (“mixed”), respectively. Only assessments made 64 and 78 Days After Planting (DAP) were
followed by acquisition of ground-based and UAV data (*).

3.2. Assessment of General Spectral Changes Related to Different Cropping Systems and Late Blight Infection
Remote Sens. 2019, 11, 224 15 of 47
In Figure 5, average reflectance is presented for pixels grouped in discrete intervals of LLR. In this case, higher LLR values indicate higher probability for coefficients h in the distribution estimated for T1 in comparison to the distribution estimated for T2. Although differences in spectra with relatively stronger relationship with T1 (i.e., higher probability in the distribution for T1), in comparison with spectra having weaker association with this treatment, could be detected by both sensing approaches, the relationship observed between spectral information and the treatment of interest (T1) using UAV imagery was generally less intense. For example, in results corresponding to the first data acquisition (37 DAP), considerable differences in the visible and near-infrared are observed when comparing the first LLR interval (Figure 5a, blue line) with other intervals (Figure 5a, color scale) for ground-based data. However, these differences were attenuated (i.e., lower variation indicated by ratio values and smaller range of LLR) in UAV images (Figure 5b).
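The pixel-wise LLR comparison can be sketched as follows. This is a hedged illustration, not the authors' implementation: each pixel is summarized by a scalar coefficient h, and Gaussian densities and the sample values stand in for whatever distribution estimator was actually applied to the coefficients of the two treatments.

```python
# Hedged sketch of a pixel-wise log-likelihood ratio (LLR): densities estimated
# from coefficients of two treatments (T1 = H1, T2 = H0) are compared per pixel.
# Gaussian densities and all values here are illustrative assumptions.
from math import exp, log, pi, sqrt

def fit_gaussian(samples):
    """Sample mean and (unbiased) standard deviation."""
    m = sum(samples) / len(samples)
    var = sum((s - m) ** 2 for s in samples) / (len(samples) - 1)
    return m, sqrt(var)

def gaussian_pdf(x, mean, std):
    return exp(-0.5 * ((x - mean) / std) ** 2) / (std * sqrt(2.0 * pi))

def pixel_llr(h, params_t1, params_t2):
    """log p(h | T1) - log p(h | T2); positive values favor T1 (H1)."""
    return log(gaussian_pdf(h, *params_t1)) - log(gaussian_pdf(h, *params_t2))

def llr_interval(llr, step=0.5, lo=0.0, hi=15.0):
    """Discrete interval index between 0 and 15 in steps of 0.5, used to group pixels."""
    llr = min(max(llr, lo), hi - 1e-9)
    return int(llr // step)

# Invented coefficients for pixels of each treatment:
h_t1 = [0.62, 0.71, 0.66, 0.75, 0.69]
h_t2 = [0.31, 0.42, 0.38, 0.35, 0.44]
p1, p2 = fit_gaussian(h_t1), fit_gaussian(h_t2)
print([llr_interval(pixel_llr(h, p1, p2)) for h in h_t1])
```

Average reflectance per interval, as plotted in Figure 5, would then be obtained by grouping the spectra of all pixels sharing an interval index.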
Figure 5. Average reflectance for pixels from T1 (“non-mixed” system) grouped according to log-likelihood ratio (LLR) in discrete intervals, between 0 to 15, in steps of 0.5. LLR compares pixel-wise probability estimated for T1 (H1) in contrast to T2 (H0; “mixed” system). Ground (a,c,e,g) and UAV-based (b,d,f,h) data are presented for all acquisition dates. Colors of the average spectral signatures indicate the average LLR value for pixels included in a given interval. Ratio indicates the results for the division of the reflectance (band-wise) corresponding to a given LLR interval by that from the interval with the lowest LLR values (i.e., pixels with LLR below 0.5; indicated by blue dashed line). Delta (∆) corresponds to the percentage of observations (pixels) within a given LLR interval for T1 subtracted from the percentage of observations in the same LLR interval for T2 (reference group). Delta is plotted in front of the average spectral signatures representing each LLR interval. Delta total (∆t) indicates absolute cumulated delta values.
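The Ratio and Delta diagnostics defined in this caption amount to simple per-interval arithmetic. A minimal sketch, with invented reflectances and pixel counts:

```python
# Minimal sketch of the "Ratio" and "Delta" diagnostics from the Figure 5
# caption; reflectances and pixel counts below are invented examples.
def band_ratio(mean_spectrum, lowest_llr_spectrum):
    """Band-wise division of an interval's mean spectrum by the lowest-LLR interval's."""
    return [b / r for b, r in zip(mean_spectrum, lowest_llr_spectrum)]

def delta(count_t1, total_t1, count_ref, total_ref):
    """Percentage of T1 pixels in an LLR interval minus the reference-group percentage."""
    return 100.0 * count_t1 / total_t1 - 100.0 * count_ref / total_ref

mean_interval = [0.04, 0.08, 0.30]   # e.g., red, red-edge, NIR reflectance
mean_lowest = [0.05, 0.10, 0.40]
print([round(v, 2) for v in band_ratio(mean_interval, mean_lowest)])
print(delta(120, 1000, 40, 1000))    # 12% of T1 pixels vs 4% of reference pixels
```

Summing the absolute delta values over all intervals gives the ∆t reported in each panel.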
In addition, spectral variation observed for both data sources in the first acquisition (37 DAP) indicates that plants from T1 had a potentially smaller leaf area index in comparison with those from T2 (i.e., lower reflectance in the NIR). This trend follows the values of WDVI observed at the sampling unit level on this date (Figure 3f). For images acquired 50 DAP (second data acquisition; Figure 5c,d), results obtained using the camera on-board the UAV platform differ from those observed at ground level. While the patterns derived from UAV data (Figure 5d) indicate mainly that plants from T1 had a relatively larger leaf area index, results aligned with the ground truth (Figure 3f), ground-based images indicated in general a larger variability for plants in T1, i.e., with some areas of the canopy on T1 characterized by a larger leaf area index (i.e., higher reflectance in the NIR) and others by the opposite characteristic. The third data acquisition (64 DAP) was characterized by small spectral differences between pixels with stronger and relatively weak association with T1, for both datasets (ground and UAV images) (Figure 5e,f). However, larger differences were detected between T1 and T2 by ground-based data on this date. Finally, comparable outputs between ground-based and UAV images (Figure 5g,h) were obtained for the last data acquisition (78 DAP). Both sensing approaches measured lower reflectance in the NIR together with higher reflectance in the visible for spectra with a stronger relationship with T1, following trends described by the ground truth observations (Figures 3 and 4), i.e., potentially smaller leaf area index and lower leaf chlorophyll content coupled with higher levels of disease severity for plants in T1.

In Figure A6, the pixel-wise LLR is presented for areas imaged using both sensing approaches (i.e., camera in handheld mode and on-board the UAV platform) from 37 to 78 DAP. Differences between treatments in the first two acquisitions (37 and 50 DAP) are small and related to cultivar-intrinsic characteristics concerning mainly canopy structure, as already observed in Figure 5. Although late blight development started to be observed in the third acquisition (64 DAP), with higher disease incidence in plants cultivated with T1, spectral differences detected between treatments were small (Figures 5 and A6). In the last acquisition (78 DAP), late blight incidence and severity increased on plants cultivated with T1, affecting the canopy spectral response in comparison to T2. Pixels indicated with relatively higher LLR values on ground-based images can be found spread across different parts of the canopy, but especially in areas with reduction in canopy (i.e., LAI) or leaf
structure (i.e., compact layers originating the interface between air and cells within the mesophyll,
as described by Jacquemoud et al. [77]) due to disease development (Figures 5g and A6g). Changes
in leaf chlorophyll content also potentially occurred in areas affected by the disease, as indicated
by variations observed in the green, red, and red-edge spectral regions in Figure 5g. UAV data
followed the same patterns observed for ground-based images, with higher values of LLR for spectra
corresponding to areas with smaller canopy and leaf structure, as well as lower leaf chlorophyll
content, although differences between T1 and T2 are attenuated in the lower resolution UAV imagery
(Figures 5h and A6h). A general quantitative assessment of the correspondence between ground- and
UAV-derived LLR values is provided in Figure A5b. From the patterns described by the median LLR
values, it is possible to observe that in general both data sources provided similar outputs, despite
some differences concerning the distribution of the LLR values within the sampling units and the
attenuation of the disease effects detected by the UAV imagery in comparison with the ground-based
data (as described in Figures 5 and A6).
Figure A7 illustrates the identification of areas of the canopy affected by late blight based on the log-likelihood ratio for images acquired at ground level. It is worth noting that UAV imagery followed patterns similar to those observed in ground-based data (Figure A6), but the relationship with disease development was less pronounced, and areas of the canopy potentially affected by the pathogen were eventually removed after background classification, since these areas were normally characterized by smaller leaf area index and ground coverage, which could result in higher spectral mixing between vegetation and background components. This may have affected the analysis outputs; however, this information was removed from the analyzed dataset, since otherwise the potential of the implemented method could be overestimated.

3.3. Effects of Specific Late Blight Severity Levels on the Crop Spectral Response
Different late blight severity classes were observed within each treatment. Therefore, a more specific evaluation of the progressive effects of disease development on canopy reflectance is of interest in order to better describe changes strictly related to infection by the pathogen.
In Figure 6, average reflectance for pixels within discrete intervals of LLR is described. In this
case, LLR values correspond to the evaluation of SUs classified according to a specific disease severity
level (H1) against healthier SUs used as reference (H0). Only small differences between spectra with
relatively stronger relationship with diseased areas in contrast with those weakly associated with
these patches are observed for disease severity up to 1.0% or between 1.0% and 2.5% (Figure 6a,b).
Conversely, larger differences were detected for disease severity levels starting between 2.5% and
5.0% until between 10.0% and 15.0% (Figure 6c–f). In these cases, differences are mainly observed in
the red-edge and NIR spectral regions, indicating that wavelengths in this interval may have greater
discriminative potential concerning reflectance measured for healthy and diseased areas.
Spectral signatures corresponding to pixels with higher LLR for each severity class (i.e.,
“characteristic” spectral signatures of each class) change as disease intensity increases, in particular
when severity levels between 2.5% and 7.0% (Figure 6c,d) are compared with those between 7.0% and
15.0% (Figure 6e,f). These differences indicate that characteristic spectra of lower disease intensities
(between 2.5% and 7.0%; Figure 6c,d) correspond mainly to areas with potentially smaller leaf area
index, since reflectance in the NIR region is especially low in these cases. As disease severity increases,
reflectance in the NIR also increases for characteristic spectra of diseased patches, which indicates
that in part these spectral signatures correspond to areas with larger canopy architecture (i.e., LAI) in
comparison with lower disease intensity areas.
Figure 6. Average reflectance for pixels within discrete intervals of log-likelihood ratio (LLR) between 0 and 5.5, in steps of 0.5. LLR, in this case, compares pixel-wise probability estimated for diseased sampling units (SUs; H1; ≤1.0%, ≤2.5%, ≤5.0%, ≤7.0%, ≤10.0% and ≤15.0% disease severity, in a–f, respectively) in contrast to healthier SUs (H0; only healthy plants for 64 DAP or disease severity below 1.0% for 78 DAP). Colors of the spectral curves indicate the average LLR value for pixels included in a given interval. Ratio indicates the division of reflectance corresponding to a given LLR interval by that from the interval with the lowest LLR values (i.e., for pixels with LLR below 0.5; indicated by the blue line). Delta (∆) corresponds to the percentage of observations (pixels) within a given LLR interval for SUs from a specific disease severity class subtracted from the percentage of observations in the same LLR interval for the healthier SUs used as reference. Delta is plotted in front of each average spectral signature for the corresponding LLR interval. Delta total (∆t) indicates absolute cumulated delta values.
These trends are confirmed by Figure 7, which shows the distribution of LLR values for each severity class, in SUs imaged 64 and 78 DAP. Crop patches with disease severity between 2.5% and 7.0% (Figure 7c,d) have higher LLR values concentrated in regions with low leaf area index, mainly in the boundary of the region retained during background removal. Conversely, patches with severity between 7.0% and 15.0% (Figure 7e,f) have pixels with higher LLR values spread in different areas of the sampling unit, although segments with low canopy structure (i.e., low LAI) in the boundary of the vegetated area are still frequently identified as strongly related to late blight infected plants.
Pixels with the highest values of LLR for SUs with disease severity between 1.0% and 2.5%
(low severity level; Figure 7b) are mostly concentrated in specific parts of the canopy. Since this
severity class was exclusively observed in SUs cultivated with T2 (“mixed” system), areas with higher
LLR in this case may be related to specific potato cultivar(s) with lower resistance to late blight.
Therefore, characteristic traits for these cultivars may be the reason why differences between average
spectra for diseased in contrast to healthier plants indicate potentially larger leaf area index (i.e., higher
reflectance in the NIR) for areas related to disease incidence.
Figure 7. Distribution of log-likelihood ratio (LLR) within sampling units (SUs) scored for late blight development 64 and 78 DAP. Date of UAV image acquisition and corresponding disease severity class (DS) are indicated above the images representing eight SUs selected from those observed for each class. Crop patches cultivated with T1 (“non-mixed” system) are indicated by red frames and those cultivated with T2 (“mixed” system) by black frames (images chosen for illustration were randomly selected from those observed in each disease severity class, as indicated in Table A5). Disease severity classes from up to 1.0% until between 10.0% and 15.0% are represented in images (a–f). Scale bars in the left upper corner of each image represent 25 cm.
Finally, the distribution of pixels with relatively higher LLR values in SUs with the lowest severity class considered (i.e., up to 1.0% severity; Figure 7a) is associated with areas with smaller leaf area index in the boundary between vegetation and background, or eventually spread in different parts of the sampling unit. Despite the fact that the characteristics observed are typical of disease development, the small spectral differences observed and the low LLR values obtained indicate that these differences were subtle and difficult to detect (i.e., weak evidence of considerable contrast between healthy and diseased areas).

In Figure 8, distributions of values from vegetation indices with relatively good performance (according to results in Table A8) concerning the discrimination of SUs with very low late blight incidence (≤1.0%; reference) from those with higher disease severity levels (between 2.5 and 5.0% and between 10.0 and 15.0% severity) are presented, for data acquired 78 DAP. It is possible to notice that, considering all pixels for each disease severity class (Figure 8a,d,g), differences with respect to the healthier reference are observed only for the highest level of disease severity (≤15.0%). After selecting spectra according to their association with a given severity class (i.e., based on LLR values), differences between the reference and other classes only increased for SUs within the ≤15.0% disease severity category when the vegetation indices used were sensitive to chlorophyll content at leaf and canopy level (i.e., CIre and REP; Figure 8b,c,e,f). Discrimination between the reference and relatively low disease severity (≤5.0% disease severity class) is only observed for the vegetation index associated with canopy structure (i.e., WDVI; Figure 8g–i). This discrimination was improved after selecting spectral information more intensely related to the specific disease severity classes considered. Therefore, increased discriminative potential of spectra selected according to LLR values is observed, not only for higher disease severity levels but also for relatively low disease incidence (i.e., with disease severity between 2.5 and 5.0%). These results illustrate the increased potential for late blight severity assessment based on selected spectral information related to disease incidence.
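The three vegetation indices compared here can be sketched in their common textbook formulations; the band centers, reflectance values, and the WDVI soil-line slope below are assumptions for illustration, not necessarily those used in the study.

```python
# Hedged sketch of the vegetation indices discussed (CIre, REP, WDVI) in their
# common published forms; band choices and the soil-line slope are assumptions.
def ci_rededge(r_nir, r_rededge):
    """Red-edge Chlorophyll Index: sensitive to chlorophyll content."""
    return r_nir / r_rededge - 1.0

def rep(r670, r700, r740, r780):
    """Red-Edge Position by four-point linear interpolation (Guyot and Baret)."""
    r_inflection = (r670 + r780) / 2.0
    return 700.0 + 40.0 * (r_inflection - r700) / (r740 - r700)

def wdvi(r_nir, r_red, soil_slope=1.5):
    """Weighted Difference Vegetation Index: NIR corrected for soil background."""
    return r_nir - soil_slope * r_red

# Illustrative reflectances (fractions) for a healthy canopy pixel:
print(round(ci_rededge(0.45, 0.18), 2),
      round(rep(0.05, 0.10, 0.35, 0.45), 1),
      round(wdvi(0.45, 0.05), 3))
```

CIre and REP respond mainly to chlorophyll at leaf and canopy level, while WDVI tracks canopy structure, which matches how the three indices behave across the severity classes in Figure 8.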
Figure 8. Distribution of vegetation indices (VIs; CIre (a–c); REP (d–f); WDVI (g–i)) values for sampling units within different disease severity (DS) classes. Only selected VIs providing relatively good discriminative potential between healthier references and the DS classes considered (Table A8) for UAV imagery acquired 78 DAP are presented. Green dots indicate pixels within a given DS class (from ≤1.0% up to between 10.0% and 15.0%), while red dots and red error bars correspond to median and standard deviation for these observations. Values in parentheses indicate the log-likelihood ratio (LLR) threshold used to select pixels in a given percentile. Black dashed lines separate healthier observations (references—*) from other DS classes. Blue dashed lines indicate the average VI value for pixels included in the references. It is worth noting that for the percentiles, two distinct sets of pixels represent the reference, one for each DS class above 1.0% DS.
3.4. Spatial Patterns of Visual Disease Assessment Compared with Outputs of Simplex Volume Maximization (SiVM) and Log-Likelihood Ratio Applied to UAV Imagery
The distribution of disease severity scores for SUs assessed 78 DAP is presented in Figure 9. As also indicated in Figure 4, experimental plots cultivated with T1 were in general characterized by more intense development of late blight. SUs with relatively high disease severity levels (i.e., above 10.0%) were generally located in patches with potentially lower leaf chlorophyll content and smaller canopy structure, as indicated by lower OSAVI values, which might be related to disease incidence, as also indicated by the rank correlation between VIs and disease severity (Table A7). The association between disease severity and general crop vitality is less evident for lower disease severity levels, in particular for SUs with late blight severity below 5.0%.
Figure 9. Distribution of visual scores into specific classes of late blight severity (according to the
approximate percentage of affected leaf area) in SUs evaluated 78 DAP. Experimental plots 1–8 are
indicated by figures (a–h). Background images include values of OSAVI (Optimized Soil Adjusted
Vegetation Index; VI) for pixels retained after vegetation segmentation and a false color composite (833,
663, and 609 nm as RGB bands).
The corresponding LLR values for SUs assessed 78 DAP are presented in Figure 10, in this case considering 7.0% of late blight severity as hypothesis H1. SUs with higher disease severity levels (i.e., above 10%) were characterized by pixels with higher LLR values spread in different parts of the canopy, as also indicated in Figure 7. In contrast, SUs with lower severity levels have pixels with higher LLR values concentrated in patches with smaller leaf area index, in the boundary between the crop canopy and background.
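The pixel-wise LLR construction can be illustrated with a minimal sketch: class-conditional probability distributions are estimated from reference pixels (here 1-D histograms of a single vegetation index such as OSAVI, whereas the actual analysis uses the full spectral signatures), and each pixel is scored by the log-ratio of its likelihood under the diseased (H1) and healthy (H0) distributions. Function names, the histogram binning, and the toy data below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pixelwise_llr(pixels, diseased_ref, healthy_ref, bins=32, vi_range=(0.0, 1.0)):
    """LLR per pixel: log P(x | H1, diseased) - log P(x | H0, healthy).

    All inputs are 1-D arrays of a vegetation index (e.g., OSAVI).
    Class-conditional densities are approximated by smoothed histograms.
    """
    edges = np.linspace(vi_range[0], vi_range[1], bins + 1)
    h1, _ = np.histogram(diseased_ref, bins=edges)
    h0, _ = np.histogram(healthy_ref, bins=edges)
    p1 = (h1 + 1.0) / (h1.sum() + bins)   # additive smoothing avoids log(0)
    p0 = (h0 + 1.0) / (h0.sum() + bins)
    idx = np.clip(np.digitize(pixels, edges) - 1, 0, bins - 1)
    return np.log(p1[idx]) - np.log(p0[idx])

# toy reference data: diseased canopy tends towards lower OSAVI values
rng = np.random.default_rng(0)
healthy = rng.normal(0.75, 0.05, 5000).clip(0, 1)
diseased = rng.normal(0.55, 0.10, 5000).clip(0, 1)
llr = pixelwise_llr(np.array([0.80, 0.50]), diseased, healthy)
# a high-OSAVI pixel scores LLR < 0 (healthy-like), a low-OSAVI pixel LLR > 0
```

Aggregating such pixel scores at sampling unit level (e.g., the median, or only the upper percentiles) yields the SU-level summaries used in the comparisons with visual severity scores.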


Figure 10. Log-likelihood ratio (LLR) for UAV data acquired 78 DAP (i.e., last data acquisition). LLR represents the comparison of pixel-wise probability considering distributions for diseased SUs (H1: ≤7.0% severity) and a healthy reference (H0: up to 1.0% late blight severity). OSAVI values are indicated in grey scale (VI). Experimental plots 1–8 are represented in figures (a–h).

Visually comparing Figures 9 and 10, the association between LLR values and disease severity can be observed, notably with higher disease incidence in plots 1 (Figure 9a), 3 (Figure 9c), 7 (Figure 9g), and 8 (Figure 9h), followed by higher LLR values in SUs located in these plots (Figure 10a,c,g,h). This trend can be verified in Table 2, which describes the rank correlation between LLR values and disease severity classes for the last two data acquisitions (i.e., 64 and 78 DAP).

Table 2. Kendall-tau correlation coefficients between disease severity classes (as ordinal variable) and
median of log-likelihood ratio (LLR as continuous variable) at sampling unit level for assessment made
64 and 78 DAP. Values are given for each disease severity class used to derive probability distributions,
which were compared with the distribution for the reference class (only healthy patches for 64 DAP
and ≤1.0% severity for 78 DAP) during estimation of pixel-wise LLR.

Disease Severity Class Considered for LLR Calculation


Dataset 64 DAP 2 78 DAP 2
≤1.0 1 ≤2.5 1 ≤5.0 1 ≤7.0 1 ≤10.0 1 ≤15.0 1
All pixels 0.249 * 0.020 0.321 *** 0.592 *** 0.519 *** 0.522 ***
Upper 20th percentile of LLR 0.286 * −0.029 0.106 0.562 *** 0.534 *** 0.524 ***
Upper 10th percentile of LLR 0.313 * −0.038 0.074 0.556 *** 0.537 *** 0.516 ***
1 Significant at 0.05 (*), 0.01 (**) or 0.001 (***) level; 2 only observations from T1 considered for 64 DAP, while data corresponding to both treatments were used for 78 DAP.

For 78 DAP, it is possible to notice that while distributions for low disease severity levels (below
5.0% severity) result in LLR values loosely correlated with disease severity, relatively high disease
severity classes yield distributions with corresponding LLR values better correlated with disease
severity. This is probably due to the similarity observed between characteristic spectra (i.e., spectra
from pixels with relatively high LLR) for disease severity classes above 5.0% severity, as described
in Figure 6, which lead to a relatively good correlation between LLR values and disease severity for
observations within these classes. It is worth noting that considering information corresponding to
upper percentiles of LLR values did not improve the relationship between LLR and disease severity
classes. This may be related to the same factor cited before, i.e., similarity between characteristic spectral
response corresponding to SUs with higher disease severity levels and their relative dissimilarity with
spectra characteristic of patches having lower disease incidence, which was not altered after selecting
observations within upper percentiles of LLR.
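The rank correlations reported in Table 2 can be reproduced in outline as follows: a plain Kendall tau-b (tie-corrected) implementation applied to the ordinal severity class and the median LLR of each sampling unit. The sample values below are invented for illustration only.

```python
def kendall_tau_b(x, y):
    """Kendall tau-b rank correlation with tie correction, O(n^2) pairwise form."""
    n = len(x)
    concordant = discordant = ties_x = ties_y = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = x[i] - x[j], y[i] - y[j]
            if dx == 0:
                ties_x += 1
            if dy == 0:
                ties_y += 1
            if dx != 0 and dy != 0:
                if dx * dy > 0:
                    concordant += 1
                else:
                    discordant += 1
    n_pairs = n * (n - 1) / 2
    return (concordant - discordant) / ((n_pairs - ties_x) * (n_pairs - ties_y)) ** 0.5

# hypothetical sampling units: ordinal severity class and median LLR per SU
severity_class = [0, 0, 1, 1, 2, 2, 3, 3]
median_llr = [-1.2, -0.8, -0.5, 0.1, 0.2, 0.4, 1.1, 1.5]
tau = kendall_tau_b(severity_class, median_llr)  # close to +1: strong association
```

The tie correction matters here because many sampling units share the same ordinal severity class while their median LLR values are all distinct.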
A quantitative overview of the LLR values distribution according to the late blight incidence
levels is provided in Figure 11. It is possible to notice that for low severity levels (i.e., below 5.0%,
Figure 11a–c) the number of pixels with relatively high LLR is larger for the SUs scored with the severity
level considered (black lines in Figure 11). At the same time, the number of pixels with relatively
high LLR values is smaller for SUs with lower or higher disease severity than the class considered
(green and red lines in Figure 11). This indicates that characteristic spectral signatures for diseased
SUs (i.e., corresponding to pixels with high LLR values) were relatively specific to each severity level
in this case. Conversely, for SUs scored with higher severity levels (i.e., above 5%, Figure 11d–f),
the number of pixels with relatively high LLR values increases progressively from SUs with lower
disease severity levels (i.e., green lines) to SUs with higher disease severity levels (i.e., red lines) than
the class considered. This indicates that characteristic spectral signatures for diseased areas are similar
for severity classes corresponding to higher infection levels, considering the progression observed in
the LLR distribution.
Results presented in Figure 11 reflect outputs already presented before in Figures 6–10 and Table 2,
which indicate that lower disease severity classes had pixels with higher LLR values concentrated in
patches with smaller leaf area index (Figures 7, 9 and 10), differing from SUs with higher late blight
incidence, which were characterized by distribution of pixels with higher LLR values in different parts
of the canopy.


Figure 11. Distribution of log-likelihood ratio (LLR) values derived for patches of UAV images acquired 64 and 78 DAP. Black lines correspond to LLR extracted from sampling units (SUs) within the late blight severity level (disease severity (DS)) considered as hypothesis H1 (≤1.0%, ≤2.5%, ≤5.0%, ≤7.0%, ≤10.0% and ≤15.0% of disease severity in a–f, respectively), while comparing with healthier plants (H0, completely healthy for 64 DAP and ≤1.0% severity for 78 DAP). Green lines indicate the distribution of LLR values for SUs with lower severity levels than the class considered in each case (e.g., all sampling units with disease severity ≤1.0% in b). Red lines illustrate the distribution of LLR values for SUs with higher severity levels than the class considered in each case (e.g., all sampling units with >2.5% of disease severity in b).
4. Discussion
Measurements of crop traits, described in Section 3.1 (Figures 3 and A3), indicate that the first three data acquisitions (from 37 to 64 days after planting, DAP) were performed while crop growth progressed towards full canopy development, which occurred between 50 and 64 DAP. During this period, the main changes observed in crop traits were related to increase in leaf area index for plants cultivated in both treatments. Differences between treatments that were observed in early stages (between 37 and 50 DAP) can be attributed mainly to cultivar intrinsic characteristics. At 64 DAP, more substantial differences between treatments, concerning leaf chlorophyll content and canopy structural traits (e.g., ground cover), were observed, which may be related to initial stages of late blight development (Figure 4). In the last data acquisition (78 DAP), differences between treatments increased following the increase in disease severity, in particular for T1 (i.e., the “non-mixed” system), which confirms trends observed 64 DAP.
The variability in crop development observed during the growing season, related to disease
development or not, could be detected through optical imagery acquired using ground-based or
UAV imaging setups. For most of the acquisition dates, patterns observed for ground-based data are
comparable to those derived from UAV (Figures 5 and A6). However, in some cases disagreement was
observed, in particular in the second and third data collections (50 and 64 DAP). Differences between
analysis outputs resulting from ground-based and UAV data are mainly related to the higher spatial
resolution of the ground-based imagery. The increased resolution allowed a better description of the
variability present within the crop canopy, together with a better retention of this variability after
background removal due to the potentially lower degree of spectral mixing for this dataset.
In addition, spectral variability related to disease incidence could be well described by ground-based imagery (Figures 5 and A6a,c,e,g). On the other hand, UAV data indicated trends similar to
those observed in the ground-based images, in particular for relatively high disease severity levels,
but these trends were attenuated on this data source (Figures 5 and A6b,d,f,h). A potential limitation
in sensitivity for data acquired at canopy level, and even at leaf level, if spatial resolution is reduced
has been indicated by other authors [15,78]. However, in the present study it was observed that
even at low levels of disease severity (between 2.5 and 5% severity), spectral information related to
the disease incidence could be derived from radiometric measurements made by sensors on-board
of a UAV platform (Figures 5–7 and A6), with relatively low spatial resolution (approximately 4–5
cm of ground sampling distance) in comparison with ground-based information (with 0.1–0.2 cm
of spatial resolution). This indicates that spectral data acquired at canopy level with sub-decimeter
resolution has potential to describe spectral changes related to disease incidence, in particular if
analysis targeting the spectral information most closely related to diseased patches is used (Figure 8).
Similar results have been reported in other studies regarding the use of UAV optical imagery to
assess disease incidence in other crops [14,18,22]. Generally, in these studies, parametric statistical
frameworks (e.g., analysis of variance and groups means test) are used to evaluate the discriminative
potential of spectral information regarding disease incidence. This was performed here to compare
treatments (Figure 3) but not to evaluate specific changes related to different disease severity levels.
Implementing such analysis for evaluating the impact of different disease severity classes on the
canopy reflectance was not possible in this research since the distribution of disease incidence classes
differed between treatments and experimental blocks, which could lead to a biased evaluation in this
case. On the other hand, the evaluation reported in Figures 3 and 8 indicates that discrimination
between treatments and different disease severity levels based on aggregate information at sampling
unit level (i.e., distribution of vegetation indices for the imaged patches), as frequently performed
in other studies, is possible for relatively higher disease severity levels, although focusing on the
identification of specific spectral information related to diseased areas improved the characterization
of lower levels of disease severity through the spectral information gathered.
An interesting method for late blight monitoring in potato based on optical imagery with very high
resolution has been presented by Sugiura et al. [38]. The solution introduced by these authors provided
accurate disease severity prediction based on RGB color transformation and pixel-wise classification
through threshold optimization procedure. However, these authors relied on color features rather than
on reflectance measurements, which may reduce the flexibility of the approach proposed regarding
its application under diverse data acquisition conditions (i.e., with changes in illumination and field
of view, etc.), and when disease incidence occurs simultaneously with other abiotic or biotic stress
factors. In this sense, optimization for different datasets acquired would be required. Therefore,
using reflectance information rather than color-based features could allow mitigation of some of these
limitations, in particular if coupled with methods proposed to compensate for illumination changes and
to perform BRDF effects correction, as those described by Honkavaara et al. [47]. Also, using multi- or
hyperspectral datasets may allow improvement of discrimination potential concerning the classification
of healthy and diseased areas due to increased availability of features potentially related to the effects
of disease incidence on the crop canopy traits. More recently, Duarte-Carvajalino et al. [39] performed
machine learning-based retrieval of late blight severity in potato, using a very high resolution camera,
similarly to Sugiura et al. [38] but using a modified set-up (acquiring images in blue, green, and NIR
wavelengths instead of conventional RGB). They performed radiometric calibration for the acquired
imagery, despite the limitations of the sensor system used, and included in their analysis datasets
corresponding to different dates over the growing season. Performance comparable to that obtained
by Sugiura et al. [38] was reported, in particular for models derived using Convolutional Neural
Networks, indicating potential for similar applications in this context.
In the present research, in contrast to results previously reported in the literature, an attempt is
made to effectively relate disease development over time with spectral changes in a dataset composed
of sub-decimeter resolution UAV optical imagery. From the spectral differences observed between
treatments and disease severity classes (Figures 5–7 and A6), it is possible to conclude that disease
incidence, even at relatively low levels, has direct effects on the canopy spectral response measured
by sensors similar to that used in the study. On the other hand, intrinsic characteristics of different
potato cultivars may potentially affect the spectral response observed and lead to spurious correlation
between spectral data and disease severity observations, in particular for low late blight severity levels.
This can be an important aspect to consider during the development of future modelling approaches,
especially if based on data acquired in agronomic experiments with multiple cultivars.
Spectral changes that could be associated with late blight development, mainly based on
measurements realized on the last data acquisition (78 DAP), were characterized by reduced reflectance
on all spectral bands measured using the UAV sensor (Figure 6). This indicates that changes detected
were strongly related to alterations in the canopy and leaf structure [19]. In general, as the relationship
between the spectral information and the disease severity levels became stronger (i.e., as LLR values
increased), the reflectance decreased in all spectral bands for disease classes above 2.5% severity.
Expected spectral changes related to pigment content at leaf and canopy levels could mainly be
identified in the red-edge region, which is also associated with canopy and leaf structural traits.
Deviations in the red region, more directly related to changes in chlorophyll content (i.e., increase
in reflectance for diseased vegetation due to lower chlorophyll content), were less evident even for
higher levels of disease severity (i.e., above 2.5% severity). These facts indicate that the main areas
that could be related to disease development were those with reduced canopy (i.e., LAI) and leaf
(i.e., number of layers specifying air/wall interfaces within the leaf mesophyll) structure. Changes
related to pigment content at leaf and canopy levels were less pronounced than changes related to
canopy and leaf structural alterations. Figures 7–11 confirm these observations and indicate that
extremely early alterations related to pigment content degradation in the infected tissues may be
more difficult to detect using UAV imagery with the same characteristics as those used in this study.
Conversely, alterations in the red and green regions could be observed on ground-based spectral
measurements 64 and 78 DAP while, as already described, only small changes could be detected using
the UAV imagery for very low disease severity levels on these dates (Figures 5 and 6). It is worth
noting that for very early infection stage (≤1.0% disease severity) alterations in the visible part of
the spectrum were observed in the UAV data (Figure 6a), but LLR values for the changes observed
were very small, indicating that these alterations would probably be difficult to detect in more general
applications. Also, changes observed for disease severity between 1.0 and 2.5% (Figure 6b) followed
the opposite trend of that expected, with overall increased reflectance in the Vis-NIR region for spectra
related to the disease incidence (i.e., higher LLR values). This is probably due to the association between traits of specific susceptible cultivar(s) and disease incidence, which is indicated by higher LLR values concentrated in specific spots within the crop canopy (Figure 7b), explaining the inverse trend observed.
An important final aspect to consider is the relationship between the type of spectral information
derived from the UAV imagery acquired and the outputs of the analysis relating spectral information
with disease incidence and severity. The UAV data used as input for analysis, with results described
in Section 3, combines all data collected in a given location in the field during the UAV flight. This
combined information was derived taking the pixel-wise average for the complete dataset acquired,
i.e., considering all scenes obtained over each imaged area. This “average spectral data product”,
as thoroughly discussed by Aasen and Bolten [3], is characterized by reduced influence of angular
properties on the reflectance representing a given crop surface. While this is a desirable feature for
spectral datasets used to detect very subtle changes in the canopy reflectance, as those related to
early disease detection, some sensitivity may be lost regarding the characterization of lower parts of
the crop canopy. This can potentially be a reason for the relatively low association between spectral
information and disease incidence (i.e., low LLR values) for some image patches with relatively high
disease severity (Figure 7).
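The "average spectral data product" described above can be sketched as a per-cell mean over all grid-aligned scenes that observe each ground cell; the array layout and NaN masking below are illustrative assumptions, not the processing chain actually used.

```python
import numpy as np

def average_spectral_product(scenes):
    """Pixel-wise mean reflectance over all overlapping, grid-aligned scenes.

    scenes: list of 2-D arrays on a common ground grid; np.nan marks cells a
    scene does not observe. Averaging over view angles damps angular effects.
    """
    return np.nanmean(np.stack(scenes), axis=0)

# two partially overlapping scenes of a 2x2 ground area
s1 = np.array([[0.30, 0.40], [np.nan, 0.50]])
s2 = np.array([[0.32, np.nan], [0.20, 0.54]])
avg = average_spectral_product([s1, s2])
# avg[0, 0] is close to 0.31; single-observation cells keep their only value
```

Because each output cell mixes nadir and off-nadir observations, the product trades angular sensitivity (useful for the lower canopy) for radiometric stability.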
Studies focusing on the relationship between reflectance angular properties and crop trait
estimation, as those performed by Roosjen et al. [5] using UAV imagery and Kong et al. [79] using
point-based multi-angular spectral measurements, indicate that considering multiple view-angles may
increase the potential for crop trait characterization, and that estimation of properties at lower parts of
the crop canopy are better performed based on slightly off-nadir reflectance measurements. The latter
is attributed to the fact that off-nadir measurements may have greater probability to correspond to
reflected light having interacted with lower parts of the canopy, especially for wavelengths in the
visible part of the spectrum. This fact may be particularly relevant for early late blight assessment,
since the disease onset generally occurs in lower parts of the canopy, and therefore nadir-oriented measurements may miss the local changes in pigment content occurring in these areas.

5. Conclusions
In this study, the potential of radiometric readings in the optical domain to describe visual
ratings regarding the development of potato late blight was evaluated from a perspective of the
sensitivity of the spectral information to describe early changes occurring in the infected canopy areas.
It was verified that optical data acquired at canopy level with sub-decimeter resolution has potential
to provide useful information for detecting late blight incidence and assessing its severity in early
stages of disease development (i.e., between 2.5 and 5.0% disease severity). Despite these positive
outputs, the main changes detected were related to crop canopy structural traits, and to a lesser extent,
to pigment content.
The evaluation performed here focused on post-visual disease symptoms and their relationship with
changes in spectral response at canopy level. It was observed that although aggregated information
at sampling unit level (i.e., distribution of vegetation indices values) allowed, to a certain extent,
to differentiate contrasting treatments and disease severity levels, early detection of late blight might
be difficult based on frameworks involving similar approaches. Conversely, better descriptive potential
was observed when specific spectral information regarding a given treatment or disease severity level
was identified through SiVM and LLR calculation. Based on these last methods, it was possible to
identify patterns of spectral changes and their spatial arrangement in the imaged patches. These
patterns were related to disease symptoms and their spatial distribution observed in ground-based,
very-high resolution imagery. In this regard, the main detectable changes observed by UAV imagery
concerned canopy and leaf structural traits, and to a minor degree, pigment content also at leaf
and canopy levels. These facts indicate that late blight detection and severity assessment based on
UAV imagery in the optical domain with sub-decimeter resolution may rely in particular on the
identification of affected areas characterized by reduction in leaf and canopy structure. Relationship
with leaf and canopy pigment content was less perceptible than changes related to structural traits.
These observations are of interest if one intends to develop a specific framework for late blight
detection and severity assessment based on optical imagery acquired at canopy level. In this case,
including off-nadir spectral data and reflectance measurements in other spectral regions (e.g., the green region), besides red and near-infrared, may increase sensitivity of the approach
used, in particular concerning detection of changes in pigment content, considering the saturation effect
normally observed in reflectance measurements in the red region under relatively high chlorophyll
content levels.

Author Contributions: Conceptualization, M.H.D.F., H.B., D.F.A., L.K.; methodology, M.H.D.F., H.B., D.F.A.,
J.S., L.K.; formal analysis M.H.D.F., H.B., D.F.A., J.S., L.K.; data curation, M.H.D.F.; validation, M.H.D.F.;
writing—original draft preparation, M.H.D.F.; writing—review and editing, H.B., D.F.A., J.S., L.K.; visualization,
M.H.D.F.; supervision, L.K.
Funding: This research was partially funded through a scholarship granted to the first author by CAPES, the Brazilian Federal Agency for Support and Evaluation of Graduate Education (Project No. 13647-13-0), within the Ministry of Education of Brazil.
Acknowledgments: We would like to thank Jan Jansen for his help with data collection during the field campaign.
Also, we are thankful for the careful evaluation and insightful suggestions given by the anonymous reviewers,
which contributed substantially to the improvement of the content presented in this manuscript.
Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

Table A1. Band-to-band registration accuracy (RMSE in pixels) within and between spectral band
subsets for ground-based images. Results are summarized for each acquisition date indicating
average (Avrg.) and range of RMSE values observed for the images acquired in a given date (n = 8
sampling units).

Regist. Method 1 | Subset 1 2 (Avrg., Range) | Subset 1–2 2 (Avrg., Range) | Subset 2 2 (Avrg., Range) | Subset 2–3 2 (Avrg., Range) | Subset 3 2 (Avrg., Range)
37 DAP
Raw 4.50 0.67–12.73 1.51 0.83–2.62 2.12 0.77–5.93 2.44 0.72–5.41 3.39 0.57–16.99
I 1.16 0.58–2.98 0.91 0.78–1.05 0.94 0.60–3.31 0.85 0.77–0.95 0.93 0.55–1.77
II 0.87 0.51–2.10 0.76 0.55–1.39 0.87 0.55–3.38 0.63 0.56–0.75 0.72 0.51–1.81
50 DAP
Raw 7.23 0.61–27.08 2.19 0.95–4.62 2.90 0.92–18.30 2.17 1.00–3.47 5.14 0.89–18.03
I 1.67 0.55–4.01 0.96 0.85–1.27 1.14 0.78–2.44 1.02 0.88–1.30 1.24 0.65–2.51
II 0.97 0.53–2.27 0.73 0.58–1.08 1.01 0.71–2.97 0.74 0.60–1.06 0.73 0.56–1.24
64 DAP
Raw 4.94 0.52–18.32 1.99 0.72–3.29 2.29 0.70–7.53 1.91 1.14–3.28 3.33 0.61–10.57
I 1.46 0.57–2.66 1.00 0.83–1.26 1.15 0.68–1.96 0.95 0.82–1.26 1.00 0.64–1.99
II 1.10 0.54–2.18 1.37 0.67–2.17 1.34 0.66–3.40 0.68 0.59–0.84 0.76 0.58–1.28
78 DAP
Raw 8.24 0.72–30.01 1.05 0.58–1.74 2.21 0.66–5.31 2.18 0.87–3.12 5.59 0.60–24.51
I 2.07 0.58–4.56 0.98 0.67–1.22 1.30 0.68–3.52 1.07 0.90–1.28 1.52 0.65–2.97
II 1.21 0.59–2.21 0.89 0.66–1.41 1.13 0.69–2.10 0.80 0.61–1.08 0.82 0.57–1.63
1 Registration (Regist.) methods: without registration (Raw); affine (I); displacement field (II). 2 Subset 1 comprises 12 bands, between 503 and 660 nm; subset 2 is six bands, between 672 and 750 nm; and subset 3 is ten bands, from 763 to 893 nm.
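The registration accuracies in Table A1 are RMSE values expressed in pixels. The metric itself can be sketched as the root mean squared distance between matched control points located in a reference band and in the band being checked; the point coordinates and function name below are invented for illustration.

```python
import numpy as np

def registration_rmse(ref_pts, other_pts):
    """Band-to-band registration error: RMSE (in pixels) between control points
    in a reference band and the matching points in a co-registered band."""
    d = np.asarray(ref_pts, float) - np.asarray(other_pts, float)
    return float(np.sqrt(np.mean(np.sum(d ** 2, axis=1))))

# (col, row) control points in the reference band vs. the same features
# detected in another spectral band after registration
ref = [(10.0, 12.0), (40.0, 8.0), (25.0, 30.0)]
other = [(10.5, 12.0), (40.0, 8.5), (25.0, 30.0)]
rmse = registration_rmse(ref, other)  # sub-pixel residual misalignment
```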
Table A2. Vegetation indices used in this study.

Name | Acron. 1 | Formulation 2 | Acq. Level 3 | Sensitivity (Scale) 4 | Ref. 5
 
1 1
Anthocyanin Reflectance Index ARI − R G ant (L) [80,81]
 R550 R700  770
1 1
Carotenoids Index green Carg − R G car (L) [80,82]
 R515 R565  770
1 1
Car red edge Carre − R770 G car (L) [80,82]
R515 R700
R780
Chlorophyll Index green CIg −1 G chl (L) [80,82]
R550
R780
CI red edge CIre −1 A, G chl (L) [80,82]
R710
R870 /R550
Chlorophyll Vegetation Index CVI G chl (L) [83]
R670 /R550
Difference Vegetation index DVI R800 −R680 A, G chl (L) [84]
Double Difference Index DD (R749 −R720 ) − (R701 −R672 ) A, G chl (L) [85]
R554 chl, LAI, chl
Greenness Index GI G [86]
R677 x LAI (L, C)
R875 −R560 R800 −R550 R750 −R550 chl, LAI, chl
Green Normalized Difference Vegetation Index GNDVI1 to 3 ; ; G [87]
R875 +R560 R800 +R550 R750 +R550 x LAI (L, C)
R682 −R553 chl, LAI, chl
Greenness Vegetation Index GVI G [88]
R682 +R553 x LAI (L, C)
R800 −R680 chl, LAI, chl
Lichtenthaler Index LIC A, G [89]
R800 +R680   x LAI (L, C)
Modified Chlorophyll Absorption in R700
MCARI [(R700 −R670 )−0.2(R700 −R550 )] G chl (L) [90]
Reflectance Index  R670 
R750
MCARI red edge MCARIre [(R750 −R705 )−0.2(R750 −R550 )] G chl (L) [91]
R705
1.5[2.5(R800 −R670 )−1.3(R800 −R550 )]
– MCARI2 q √ G LAI (C) [92]
(2R800 +1)2 − 6R800 −5 R670 −0.5

 
R700
[(R700 −R670 )−0.2(R700 −R550 )]
MCARI/ R670
– G chl (L) [90]
OSAVI (R800 −R670 )
(1 + 0.16)
(R800 +R670 +0.16) 
R750
[(R750 −R705 )−0.2(R750 −R550 )]
MCARI/ R705
MCARI/OSAVI red edge G chl (L) [91]
OSAVIre (R750 −R705 )
(1 + 0.16)
(R750 +R705 +0.16)
R780 −R710
– Maccioni A, G chl (L) [93]
R780 −R680 
R800 0.5 R750 0.5
      
R800 R750
Modified Simple Ratio MSR1 and 2 −1 / +1; −1 / +1 A, G chl (L) [91,94]
R670 R670 R705 R705
R754 −R709 chl, LAI, chl
MERIS Terrestrial Chlorophyll Index MTCI A, G [95]
R709 −R681 x LAI (L, C)
chl, LAI, chl
Modified Triangular Vegetation Index MTVI 1.2[1.2(R800 −R550 )−2.5(R670 −R550 )] G [92]
x LAI (L, C)
R790 −R720
Normalized Difference Red Edge Index NDRE A, G chl (L) [96]
R790 +R720
R800 −R670 chl, LAI, chl
Normalized Difference Vegetation Index NDVI A, G [97]
R800 +R670 x LAI (L, C)
R750 −R705 chl, LAI, chl
NDVI red edge NDVIre A, G [98]
R750 +R705 x LAI (L, C)
R2800 −R670
– NDVI * SR A, G LAI (C) [99]
R800 +R2670
(R800 −R670 ) chl, LAI, chl
Optimized Soil Adjusted Vegetation Index OSAVI (1 + 0.16) A, G [100]
(R800 +R670 +0.16) x LAI (L, C)
(R750 −R705 ) chl, LAI, chl
OSAVI red edge OSAVIre (1 + 0.16) A, G [91]
(R750 +R705 +0.16) x LAI (L, C)
xan, car,
R570 −R531
Photochemical Reflectance Index PRI G car/chl, LAI [101]
R570 +R531
(L, C)
R800 −R635 chl, LAI, chl
Pigment Specific Normalized Difference PSND A, G [102]
R800 +R635 x LAI (L, C)
(R680 −R500 ) chl, car,
Plant Senescence Reflectance Index PSRI G [103]
R750 car/chl (L)
R800 R800
Pigment Specific Simple Ratio PSSR1 and 2 ; A, G chl (L) [102]
R650 R635
R800
– PSSR3 G car (L) [102]
R500
R675 R675
Ratio Analysis of Reflectance Spectra RARS1 and 2 ; A, G chl (L) [104]
R700 (R650 × R700 )
R760
– RARS3 G car (L) [104]
R500
R800 −R670 chl, LAI, chl
Renormalized Difference Vegetation Index RDVI A, G [105]
(R800 +R670 )2 x LAI (L, C)
[(R670 +R780 )/2]−R700 chl, LAI, chl


Red Edge Position REP 700 + 40 A, G [106]
R740 −R700 x LAI (L, C)
R690
Red Green Index RGI G car (L) [86]
R550
R800 −R450
Structure Insensitive Pigment Index SIPI G chl (L) [107]
R800 +R650
R752 [98,
Simple Ratio SR1 A, G chl (L)
R690 108]
[84,98,
R800 R750 R750 R700 R690
– SR2 to 6 ; ; ; ; G chl (L) 108–
R675 R700 R550 R670 R655
   110]
Transformed Chlorophyll Absorption Ratio R700
TCARI 3 (R700 −R670 )−0.2(R700 −R550 ) G chl (L) [111]
Index   R670 
R750
TCARI red edge TCARIre 3 (R750 −R705 )−0.2(R705 −R550 ) G chl (L) [91]
 R
 705 
R700
3 (R700 −R670 )−0.2(R700 −R550 )
TCARI/ R670
– G chl (L) [111]
OSAVI (R800 −R670 )
(1 + 0.16)
 ( R 800 +R670 +0.16) 
R750
3 (R750 −R705 )−0.2(R750 −R550 )
TCARI/ R705
TCARI/OSAVI red edge G chl (L) [91]
OSAVIre (R750 −R705 )
(1 + 0.16)
(R750 +R705 +0.16 √)
Triangular Chlorophyll Index TCI 1.2(R700 /R550 )−1.5(R670 /R550 ) × √R700 /R670 G chl (L) [112]
1.2(R700 /R550 )−1.5(R670 /R550 ) × R700 /R670
– TCI/OSAVI (R800 −R670 ) G chl (L) [112]
(1 + 0.16)
(R800 +R670 +0.16)
chl, LAI, chl
Triangular Vegetation Index TVI 0.5[120(R750 −R550 )−200(R670 −R550 )] G [113]
x LAI (L, C)
R870 − (C × R670 );
Weighted Difference Vegetation Index WDVI RSoil870 A, G LAI (C) [114]
C=
RSoil670
1 Acron. = Acronyms for VIs names; 2 Rw = reflectance in the spectral band centered in w, RSoilw = reflectance of bare soil in the spectral band centered in w. 3 Acquisition (Acq.) level of
the data used for calculation: airborne (A) or ground-based (G); 4 chl = leaf chlorophylls content, LAI = leaf area index, chl x LAI = canopy chlorophylls content, xan = xantophylls, car =
carotenoids, car/chl = ratio between carotenoids and chlorophylls, L = leaf scale, C = canopy scale; 5 References in the literature (Ref.) for the VIs formulations.
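Several of the formulations above can be evaluated directly from band reflectances. The sketch below (illustrative only, not code from this study) implements three Table A2 indices; the reflectance values in the example are hypothetical:

```python
def osavi_re(r750, r705):
    # OSAVI red edge: (1 + 0.16)(R750 - R705) / (R750 + R705 + 0.16)
    return 1.16 * (r750 - r705) / (r750 + r705 + 0.16)

def rep(r670, r700, r740, r780):
    # Red Edge Position by linear interpolation between the 700 and 740 nm bands
    r_inflection = (r670 + r780) / 2.0
    return 700 + 40 * (r_inflection - r700) / (r740 - r700)

def wdvi(r870, r670, rsoil870, rsoil670):
    # Weighted Difference Vegetation Index with soil-line slope C
    c = rsoil870 / rsoil670
    return r870 - c * r670

# Hypothetical reflectances for a healthy canopy pixel
print(round(rep(0.05, 0.10, 0.35, 0.45), 1))   # 724.0 (nm)
print(round(osavi_re(0.40, 0.12), 3))
print(round(wdvi(0.46, 0.05, 0.30, 0.15), 3))  # soil-line slope C = 2.0
```

At canopy scale the same functions apply per pixel to the segmented vegetation arrays.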
Table A3. Overview of vegetation segmentation procedure for ground-based images.

| DAP | Vegetation Pixels in Training Data (%): Calibration | Test | Retained Vegetation Indices (Table A2) Used for Binary Classification after Regularization | Percentile (%) of Probability Estimate (for Cluster-Wise Class Assignment) | RMSE of Ground Cover Estimates after Image Clustering (%): Calibration | Test |
|---|---|---|---|---|---|---|
| 37 | 29.1 | 20.7 | CVI, PSSR1, PSSR2, PSSR3, RARS3, REP, SR1, SR3, TVI | 37.5 | 2.07 | 2.56 |
| 50 | 73.0 | 64.8 | CVI, PSSR1, PSSR2, PSSR3, RARS2, RARS3, REP, SR1, SR3, TVI | 59.0 | 2.41 | 2.09 |
| 64 | 67.1 | 92.0 | CVI, PSSR1, RARS2, RDVI, REP, SR1, TVI | 60.0 | 3.51 | 2.48 |
| 78 | 82.1 | 81.8 | CVI, MTCI, PSSR1, PSSR2, PSSR3, RARS2, RARS3, REP, SR1, TVI | 85.0 | 1.56 | 2.95 |
Figure A1. RMSE for ground cover retrieval at SU level by applying vegetation index (VI) thresholds to the UAV images. Values after VI labels in the graph indicate the median RMSE for the respective index, considering the validation dataset (i.e., four sampling units per acquisition date).
Table A4. Vegetation indices (VIs; Table A2) thresholds obtained after optimization for background removal in UAV images. Different values were derived for each acquisition date in order to adapt background removal to crop development and measurement conditions. Results are ordered following the segmentation performance presented in Figure A1.

| VI | 37 DAP | 50 DAP | 64 DAP | 78 DAP |
|---|---|---|---|---|
| NDVI*SR | 0.064 | 0.287 | 0.204 | 0.192 |
| OSAVI | 0.493 | 0.679 | 0.647 | 0.623 |
| DD | 0.057 | 0.107 | 0.085 | 0.067 |
| DVI | 0.202 | 0.342 | 0.259 | 0.269 |
| WDVI | 0.182 | 0.340 | 0.252 | 0.253 |
| NDVI | 0.620 | 0.779 | 0.828 | 0.783 |
| PSSR1 | 4.269 | 8.088 | 10.630 | 8.245 |
| MSR1 | 2.582 | 3.490 | 3.952 | 3.520 |
| PSND | 0.616 | 0.764 | 0.802 | 0.738 |
| PSSR2 | 4.214 | 7.491 | 9.114 | 6.637 |
| SR1 | 3.384 | 5.893 | 7.863 | 6.186 |
| LIC | 0.563 | 0.722 | 0.783 | 0.730 |
| OSAVIre | 0.247 | 0.393 | 0.364 | 0.288 |
| MTCI | 2.234 | 2.640 | 2.348 | 1.414 |
| REP | 712.885 | 719.069 | 720.956 | 718.168 |
| Maccioni | 0.717 | 0.740 | 0.712 | 0.601 |
| RARS1 | 0.826 | 0.794 | 0.784 | 0.782 |
| CIre | 0.925 | 1.673 | 1.694 | 1.052 |
| NDRE | 0.316 | 0.455 | 0.458 | 0.345 |
| MSR2 | 1.621 | 1.967 | 1.989 | 1.701 |
| NDVIre | 0.296 | 0.435 | 0.443 | 0.331 |
| RDVI | 1.410 | 1.803 | 2.598 | 2.111 |
| RARS2 | 10.172 | 16.311 | 29.685 | 20.889 |
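Applying one of these per-date thresholds to mask background pixels is straightforward. A minimal sketch, using the OSAVI thresholds from the table (the function and variable names are our own, not from the study):

```python
import numpy as np

# Optimized OSAVI thresholds per acquisition date (Table A4)
OSAVI_THRESHOLDS = {37: 0.493, 50: 0.679, 64: 0.647, 78: 0.623}

def segment_vegetation(osavi_image, dap):
    """Return a boolean mask: True where the pixel is labelled vegetation."""
    return osavi_image > OSAVI_THRESHOLDS[dap]

osavi = np.array([[0.10, 0.70],
                  [0.65, 0.20]])
mask = segment_vegetation(osavi, dap=50)
print(mask)  # only the 0.70 pixel exceeds the 50 DAP threshold of 0.679
```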
Figure A2. Archetypes derived for ground-based (b) and UAV (e) images acquired 78 DAP (color coded from green to red according to average reflectance in the NIR). In the same graphs (a,e) spectra for two pixels selected from a UAV image patch corresponding to T2 (mixed system) are also described (green and red dashed lines with dots). The weighting for reconstruction of these spectra based on the archetypes is described in the radar plots for the ground-based (c) and UAV (f) data. The areas (green and red squares) corresponding to the selected UAV pixels (d) in the ground-based image (a) had their spectra and weights extracted and averaged to represent information comparable to that obtained for UAV data. Colors on (a) and (d) indicate values of OSAVI (Vegetation Index, Table A2; VI) for the segmented vegetation.

Table A5. Distribution of scores given at sampling unit level during disease assessment. Number of
sampling units (SUs) assigned to a given class are indicated for each acquisition date, together with the
number of imaged SUs using the spectral sensor in handheld mode (in parentheses). Numbers in red
indicate SUs not used for evaluating the effects of different disease severity classes on the crop spectral
response (see Section 2.8).

| Treat. 1 | 0 | ≤1 | ≤2.5 | ≤5 | ≤7 | ≤10 | ≤15 | ≤25 | ≤50 | ≤75 | ≤90 | ≤97.5 | >97.5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 37 DAP | | | | | | | | | | | | | |
| I | 20 (4) | – | – | – | – | – | – | – | – | – | – | – | – |
| II | 20 (4) | – | – | – | – | – | – | – | – | – | – | – | – |
| 50 DAP | | | | | | | | | | | | | |
| I | 20 (4) | – | – | – | – | – | – | – | – | – | – | – | – |
| II | 20 (4) | – | – | – | – | – | – | – | – | – | – | – | – |
| 64 DAP | | | | | | | | | | | | | |
| I | 28 (3) | 16 (1) | – | – | – | – | – | – | – | – | – | – | – |
| II | 9 | 35 (4) | – | – | – | – | – | – | – | – | – | – | – |
| 70 DAP | | | | | | | | | | | | | |
| I | 9 | 32 | 3 | – | – | – | – | – | – | – | – | – | – |
| II | 32 | 12 | – | – | – | – | – | – | – | – | – | – | – |
| 73 DAP | | | | | | | | | | | | | |
| I | – | – | 14 | 11 | 9 | 6 | – | – | – | – | – | – | – |
| II | 5 | 14 | 16 | 5 | – | – | – | – | – | – | – | – | – |
| 78 DAP | | | | | | | | | | | | | |
| I | – | 16 (1) | 11 (1) | 14 (2) | – | – | 2 | – | 1 | – | – | – | – |
| II | 2 | 8 (1) | 24 (1) | 8 (2) | 2 | – | – | – | – | – | – | – | – |
| 83 DAP | | | | | | | | | | | | | |
| I | – | – | – | – | – | – | 2 | 2 | 9 | 11 | 19 | 1 | – |
| II | – | – | 1 | 1 | 1 | 4 | 9 | 13 | 12 | 3 | – | – | – |
| 86 DAP | | | | | | | | | | | | | |
| I | – | – | – | – | – | – | – | – | – | – | 8 | 29 | – |
| II | – | – | – | – | – | – | – | 1 | 39 | – | – | – | – |
1 Treat. I—plots with one single cultivar (“non-mixed”); Treat. II—plots with a mix of cultivars (“mixed”).
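As an illustrative helper (not part of the study's code), mapping an observed severity percentage to the ordinal class labels used in this table could look like:

```python
# Upper bounds of the disease severity classes used in Table A5 (in %)
BOUNDS = [1, 2.5, 5, 7, 10, 15, 25, 50, 75, 90, 97.5]

def severity_class(ds):
    # Map a severity percentage to its ordinal class label
    if ds == 0:
        return "0"
    for b in BOUNDS:
        if ds <= b:
            return f"<={b}"
    return ">97.5"

print(severity_class(0), severity_class(3.2), severity_class(99))  # 0 <=5 >97.5
```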
Table A6. C-statistic for pixel-wise binary classification according to T1 (“non-mixed” system) or T2
(“mixed” system) in each acquisition date using vegetation indices (VIs; Table A2) as independent
variables. Only results concerning UAV-derived spectra and sampling units selected for validation of
the logistic regressions are reported. Results are ordered (parentheses) according to values of C-statistic
for the last acquisition (78 DAP).

| VI | 37 DAP | 50 DAP | 64 DAP | 78 DAP |
|---|---|---|---|---|
| Group I—VIs optimized to estimate leaf chlorophyll content | | | | |
| MSR2 | 0.642 (7) | 0.516 (12) | 0.601 (4) | 0.761 (1) |
| SR1 | 0.716 (3) | 0.528 (9) | 0.605 (1) | 0.756 (2) |
| CIre | 0.62 (9) | 0.523 (10) | 0.604 (3) | 0.752 (3) |
| NDRE | 0.62 (10) | 0.523 (11) | 0.604 (2) | 0.752 (4) |
| PSSR1 | 0.712 (5) | 0.555 (7) | 0.598 (5) | 0.739 (6) |
| MSR1 | 0.712 (4) | 0.555 (6) | 0.598 (6) | 0.739 (5) |
| MAC | 0.565 (12) | 0.564 (5) | 0.588 (7) | 0.728 (7) |
| PSSR2 | 0.752 (2) | 0.549 (8) | 0.559 (8) | 0.713 (8) |
| DD | 0.596 (11) | 0.567 (4) | 0.507 (11) | 0.617 (9) |
| DVI | 0.641 (8) | 0.571 (3) | 0.497 (12) | 0.608 (10) |
| RARS2 | 0.803 (1) | 0.584 (2) | 0.553 (9) | 0.604 (11) |
| RARS1 | 0.67 (6) | 0.608 (1) | 0.52 (10) | 0.527 (12) |
| Group II—VIs optimized to estimate canopy traits | | | | |
| REP | 0.693 (5) | 0.513 (11) | 0.632 (1) | 0.767 (1) |
| NDVIre | 0.642 (6) | 0.516 (10) | 0.601 (3) | 0.761 (2) |
| LIC | 0.71 (4) | 0.525 (9) | 0.61 (2) | 0.755 (3) |
| OSAVIre | 0.533 (10) | 0.533 (8) | 0.565 (6) | 0.749 (4) |
| MTCI | 0.529 (11) | 0.558 (5) | 0.584 (5) | 0.739 (5) |
| NDVI | 0.712 (3) | 0.555 (6) | 0.598 (4) | 0.739 (6) |
| PSND | 0.752 (2) | 0.549 (7) | 0.559 (7) | 0.713 (7) |
| OSAVI | 0.545 (9) | 0.573 (4) | 0.515 (9) | 0.654 (8) |
| NDVI*SR | 0.576 (8) | 0.574 (1) | 0.495 (11) | 0.619 (9) |
| WDVI | 0.62 (7) | 0.573 (3) | 0.508 (10) | 0.614 (10) |
| RDVI | 0.789 (1) | 0.573 (2) | 0.517 (8) | 0.538 (11) |
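The C-statistic reported in Tables A6 and A8 equals the area under the ROC curve of the pixel-wise classification. It can be computed directly from classifier scores and reference labels, as in this minimal sketch (the scores shown are hypothetical):

```python
def c_statistic(scores, labels):
    # Concordance index (AUC): fraction of positive/negative pixel pairs
    # in which the positive pixel receives the higher score (ties count 0.5)
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.3, 0.7, 0.2, 0.4]
labels = [1, 1, 0, 1, 0, 0]
print(c_statistic(scores, labels))  # 1.0 (perfect separation in this toy case)
```

A value of 0.5 indicates no discrimination between the two treatments (or severity classes), and 1.0 indicates perfect separation.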

Table A7. Kendall-tau correlation coefficients between disease severity classes (as ordinal variable)
and median of vegetation indices (VIs; as continuous variable) for UAV data at sampling unit level for
assessment made 64 and 78 DAP.

| Dataset | CIre 1 | REP 1 | WDVI 1 |
|---|---|---|---|
| 64 DAP 2 | | | |
| All pixels | −0.172 | −0.151 | −0.132 |
| Upper 20th percentile of VI values | −0.147 | −0.144 | −0.071 |
| Upper 10th percentile of VI values | −0.147 | −0.141 | −0.098 |
| 78 DAP 2 | | | |
| All pixels | −0.534 *** | −0.549 *** | −0.479 *** |
| Upper 20th percentile of VI values | −0.509 *** | −0.518 *** | −0.461 *** |
| Upper 10th percentile of VI values | −0.486 *** | −0.499 *** | −0.464 *** |

1 Significant at the 0.05 (*), 0.01 (**), or 0.001 (***) level; 2 only observations from T1 were considered for 64 DAP, while data corresponding to both treatments were used for 78 DAP.
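Because the severity classes are ordinal and heavily tied, the tie-corrected tau-b variant of Kendall's coefficient is the natural choice here. A self-contained sketch (the paired data are illustrative, not values from the study):

```python
from itertools import combinations
from math import sqrt

def kendall_tau_b(x, y):
    # Kendall's tau-b: (C - D) / sqrt((n0 - n_tx)(n0 - n_ty)),
    # where C/D are concordant/discordant pairs and n_tx/n_ty count ties
    n = len(x)
    c = d = tx = ty = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        dx, dy = xi - xj, yi - yj
        if dx == 0:
            tx += 1          # pair tied on x
        if dy == 0:
            ty += 1          # pair tied on y
        if dx * dy > 0:
            c += 1           # concordant pair
        elif dx * dy < 0:
            d += 1           # discordant pair
    n0 = n * (n - 1) // 2
    return (c - d) / sqrt((n0 - tx) * (n0 - ty))

# Hypothetical example: ordinal severity class vs. median VI per sampling unit
severity = [0, 1, 1, 2, 3, 3, 4]
median_vi = [0.82, 0.75, 0.77, 0.60, 0.45, 0.43, 0.30]
print(round(kendall_tau_b(severity, median_vi), 3))  # strongly negative, as in Table A7
```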
Figure A3. Linear regression (fitted by ordinary least squares) between crop traits and vegetation indices (VIs = CIre, REP, WDVI, and OSAVI; a–d) and between WDVI and other VIs (e). Prediction and confidence intervals (95%) are presented as blue dashed lines. Colors from green to red indicate time of acquisition (from 37 to 78 DAP). Dots and triangles correspond to the non-mixed and mixed cropping systems, respectively. Only the last three acquisitions are taken into account for evaluating the relationship between traits and VIs (a–d); all the data are considered for the comparison between WDVI and other VIs (e).
Figure A4. Ground-based (a,c,e,g) and UAV (b,d,f,h) imagery for SUs over the growing season. SUs cultivated with T1 (“non-mixed”) are represented in red frames and images corresponding to T2 (“mixed”) in black frames. False color composites (828, 660, and 607 nm as RGB for ground-based and nearest bands for UAV images) are displayed on the background and the foreground shows OSAVI (VI) after vegetation segmentation. Scale bars (left upper corners) indicate 25 cm. For 64 DAP, frames in dashed lines indicate SUs not measured during other acquisitions.
Figure A5. Linear regression (fitted by ordinary least squares) between ground-based and UAV data (OSAVI, a; LLR, b) corresponding to the median values for eight SUs followed during the growing season. Prediction and confidence intervals (95%) are presented as blue dashed lines. The red dashed line indicates the 1:1 diagonal. Dots correspond to the non-mixed treatment and triangles to the mixed cropping system. Colors from green to red indicate time of acquisition (from 37 to 78 DAP).
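The platform intercomparison in Figure A5 amounts to an ordinary-least-squares fit between paired per-SU medians. A minimal sketch with hypothetical medians (not data from the study):

```python
import numpy as np

# Hypothetical median OSAVI per SU for the two platforms (eight SUs)
ground = np.array([0.50, 0.62, 0.68, 0.71, 0.66, 0.60, 0.55, 0.48])
uav    = np.array([0.52, 0.60, 0.70, 0.69, 0.68, 0.58, 0.53, 0.50])

# Degree-1 polyfit is an ordinary least squares line fit
slope, intercept = np.polyfit(ground, uav, 1)
residuals = uav - (slope * ground + intercept)
rmse = np.sqrt(np.mean(residuals ** 2))
print(round(slope, 2), round(intercept, 2), round(rmse, 3))
```

A slope near 1 and an intercept near 0 indicate agreement with the 1:1 line shown in the figure.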
Table A8. C-statistic for pixel-wise binary classification according to two specific disease severity (DS) classes (DS between 2.5 and 5.0% and between 10.0 and 15.0%) in contrast to a healthier reference (DS up to 1.0%), for the last acquisition date (78 DAP). Only results concerning UAV-acquired spectra and sampling units (SUs) selected for validation of the classification approach are reported. Results are ordered (parentheses) according to values of C-statistic for all pixels within the SUs considered. Results concerning the selection of pixels within the upper 20th and 10th percentiles of log-likelihood ratio (LLR) values indicating association with a given DS class are also presented.

| Vegetation Index | All Data: DS ≤ 5.0% | DS ≤ 15.0% | 20th Percentile of LLR: DS ≤ 5.0% | DS ≤ 15.0% | 10th Percentile of LLR: DS ≤ 5.0% | DS ≤ 15.0% |
|---|---|---|---|---|---|---|
| Group I—leaf chlorophyll content related | | | | | | |
| MSR2 | 0.583 (5) | 0.839 (1) | 0.550 (7) | 0.936 (1) | 0.568 (4) | 0.944 (3) |
| NDRE | 0.546 (10) | 0.828 (2) | 0.450 (12) | 0.928 (3) | 0.431 (12) | 0.948 (1) |
| CIre | 0.552 (9) | 0.828 (3) | 0.466 (11) | 0.933 (2) | 0.433 (11) | 0.945 (2) |
| MAC | 0.598 (3) | 0.802 (4) | 0.540 (9) | 0.912 (4) | 0.441 (10) | 0.943 (4) |
| SR1 | 0.574 (6) | 0.802 (5) | 0.583 (5) | 0.844 (5) | 0.529 (7) | 0.786 (5) |
| PSSR1 | 0.574 (7) | 0.771 (6) | 0.568 (6) | 0.803 (8) | 0.521 (8) | 0.746 (6) |
| MSR1 | 0.572 (8) | 0.769 (7) | 0.584 (4) | 0.820 (6) | 0.513 (9) | 0.744 (7) |
| DD | 0.609 (2) | 0.756 (8) | 0.693 (2) | 0.703 (10) | 0.616 (2) | 0.722 (9) |
| DVI | 0.622 (1) | 0.726 (9) | 0.715 (1) | 0.806 (7) | 0.641 (1) | 0.562 (12) |
| PSSR2 | 0.494 (11) | 0.72 (10) | 0.548 (8) | 0.763 (9) | 0.534 (6) | 0.733 (8) |
| RARS2 | 0.592 (4) | 0.509 (11) | 0.654 (3) | 0.572 (11) | 0.583 (3) | 0.669 (10) |
| RARS1 | 0.482 (12) | 0.484 (12) | 0.537 (10) | 0.556 (12) | 0.546 (5) | 0.603 (11) |
| Group II—canopy traits related | | | | | | |
| OSAVIre | 0.495 (10) | 0.849 (1) | 0.567 (8) | 0.971 (1) | 0.503 (11) | 0.962 (1) |
| REP | 0.518 (9) | 0.837 (2) | 0.532 (11) | 0.901 (4) | 0.518 (8) | 0.883 (4) |
| NDVIre | 0.582 (6) | 0.837 (3) | 0.556 (9) | 0.939 (2) | 0.548 (7) | 0.952 (2) |
| MTCI | 0.637 (1) | 0.812 (4) | 0.606 (5) | 0.929 (3) | 0.576 (5) | 0.942 (3) |
| LIC | 0.580 (7) | 0.804 (5) | 0.598 (6) | 0.855 (5) | 0.572 (6) | 0.804 (5) |
| NDVI | 0.572 (8) | 0.768 (6) | 0.574 (7) | 0.806 (6) | 0.508 (10) | 0.742 (6) |
| OSAVI | 0.615 (4) | 0.752 (7) | 0.692 (4) | 0.752 (8) | 0.656 (2) | 0.631 (8) |
| WDVI | 0.627 (2) | 0.747 (8) | 0.746 (1) | 0.722 (9) | 0.708 (1) | 0.589 (9) |
| NDVI*SR | 0.614 (5) | 0.732 (9) | 0.700 (3) | 0.721 (10) | 0.649 (3) | 0.580 (10) |
| PSND | 0.484 (11) | 0.721 (10) | 0.547 (10) | 0.768 (7) | 0.515 (9) | 0.730 (7) |
| RDVI | 0.615 (3) | 0.669 (11) | 0.719 (2) | 0.609 (11) | 0.632 (4) | 0.517 (11) |
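The LLR used in Table A8 and in Figures A6 and A7 compares, pixel by pixel, the likelihood of an observation under two competing hypotheses. The sketch below uses univariate Gaussian class models with hypothetical parameters purely for illustration (in the study the pixel-wise probabilities come from logistic regressions on VIs):

```python
import numpy as np

def llr(x, mu1, sd1, mu0, sd0):
    # Log-likelihood ratio log p(x | H1) - log p(x | H0) under
    # univariate Gaussian class models (hypothetical parameters)
    def loglik(v, mu, sd):
        return -0.5 * np.log(2 * np.pi * sd ** 2) - (v - mu) ** 2 / (2 * sd ** 2)
    return loglik(x, mu1, sd1) - loglik(x, mu0, sd0)

vi = np.array([0.30, 0.45, 0.50, 0.62])          # pixel-wise VI values
scores = llr(vi, mu1=0.6, sd1=0.1, mu0=0.3, sd0=0.1)
top20 = vi[scores >= np.percentile(scores, 80)]  # upper 20th percentile of LLR
print(scores > 0, top20)
```

Positive scores favor H1; the percentile selection mirrors the pixel subsets evaluated in Table A8.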
Figure A6. Log-likelihood ratio (LLR) for ground-based (a,c,e,g) and UAV imagery (b,d,f,h). LLR, in this case, indicates the comparison of pixel-wise probability estimated for T1 (H1; “non-mixed” system) in contrast to T2 (H0; “mixed” system). SUs cultivated with T1 are represented with red frames and scale bars in the left upper corner of each image correspond to 25 cm. For 64 DAP, frames represented in dashed lines indicate SUs not measured during other acquisitions and which cannot be compared over time.
Figure A7. Imaged patch relatively highly affected by late blight 78 DAP (i.e., first sampling unit represented in Figure A6g). Images (a,b) correspond to false color composites for the ground image after background removal (620, 542, and 503 nm as RGB bands) and image (c) indicates the log-likelihood ratio for pixels in the highlighted area in (a, red square), also depicted in (b).
References

1. Mulla, D.J. Twenty five years of remote sensing in precision agriculture: Key advances and remaining knowledge gaps. Biosyst. Eng. 2013, 114, 358–371. [CrossRef]
2. Kuska, M.T.; Mahlein, A.-K. Aiming at decision making in plant disease protection and phenotyping by the use of optical sensors. Eur. J. Plant Pathol. 2018, 152, 987–992. [CrossRef]
3. Aasen, H.; Bolten, A. Multi-temporal high-resolution imaging spectroscopy with hyperspectral 2D imagers—From theory to application. Remote Sens. Environ. 2018, 205, 374–389. [CrossRef]
4. Capolupo, A.; Kooistra, L.; Berendonk, C.; Boccia, L.; Suomalainen, J. Estimating Plant Traits of Grasslands from UAV-Acquired Hyperspectral Images: A Comparison of Statistical Approaches. ISPRS Int. J. Geo-Inf. 2015, 4, 2792–2820. [CrossRef]
5. Roosjen, P.P.J.; Brede, B.; Suomalainen, J.M.; Bartholomeus, H.M.; Kooistra, L.; Clevers, J.G.P.W. Improved estimation of leaf area index and leaf chlorophyll content of a potato crop using multi-angle spectral data—Potential of unmanned aerial vehicle imagery. Int. J. Appl. Earth Obs. Geoinf. 2018, 66, 14–26. [CrossRef]
6. Bock, C.H.; Poole, G.H.; Parker, P.E.; Gottwald, T.R. Plant Disease Severity Estimated Visually, by Digital Photography and Image Analysis, and by Hyperspectral Imaging. Crit. Rev. Plant Sci. 2010, 29, 59–107. [CrossRef]
7. Kuska, M.T.; Brugger, A.; Thomas, S.; Wahabzada, M.; Kersting, K.; Oerke, E.-C.; Steiner, U.; Mahlein, A.-K. Spectral Patterns Reveal Early Resistance Reactions of Barley Against Blumeria graminis f. sp. hordei. Phytopathology 2017, 107, 1388–1398. [CrossRef] [PubMed]
8. Wahabzada, M.; Mahlein, A.-K.; Bauckhage, C.; Steiner, U.; Oerke, E.-C.; Kersting, K. Metro Maps of Plant Disease Dynamics—Automated Mining of Differences Using Hyperspectral Images. PLoS ONE 2015, 10, e0116902. [CrossRef]
9. Wahabzada, M.; Mahlein, A.-K.; Bauckhage, C.; Steiner, U.; Oerke, E.-C.; Kersting, K. Plant Phenotyping using Probabilistic Topic Models: Uncovering the Hyperspectral Language of Plants. Sci. Rep. 2016, 6, srep22482. [CrossRef]
10. Mahlein, A.-K.; Rumpf, T.; Welke, P.; Dehne, H.-W.; Plümer, L.; Steiner, U.; Oerke, E.-C. Development of spectral indices for detecting and identifying plant diseases. Remote Sens. Environ. 2013, 128, 21–30. [CrossRef]
11. Thomas, S.; Behmann, J.; Steier, A.; Kraska, T.; Muller, O.; Rascher, U.; Mahlein, A.-K. Quantitative assessment
of disease severity and rating of barley cultivars based on hyperspectral imaging in a non-invasive,
automated phenotyping platform. Plant Methods 2018, 14, 45. [CrossRef] [PubMed]
12. Behmann, J.; Acebron, K.; Emin, D.; Bennertz, S.; Matsubara, S.; Thomas, S.; Bohnenkamp, D.; Kuska, M.T.;
Jussila, J.; Salo, H.; et al. Specim IQ: Evaluation of a New, Miniaturized Handheld Hyperspectral Camera
and Its Application for Plant Phenotyping and Disease Detection. Sensors 2018, 18, 441. [CrossRef] [PubMed]
13. Garcia-Ruiz, F.; Sankaran, S.; Maja, J.M.; Lee, W.S.; Rasmussen, J.; Ehsani, R. Comparison of two aerial
imaging platforms for identification of Huanglongbing-infected citrus trees. Comput. Electron. Agric. 2013,
91, 106–115. [CrossRef]
14. Calderón, R.; Navas-Cortés, J.; Zarco-Tejada, P. Early Detection and Quantification of Verticillium Wilt
in Olive Using Hyperspectral and Thermal Imagery over Large Areas. Remote Sens. 2015, 7, 5584–5610.
[CrossRef]
15. Thomas, S.; Kuska, M.T.; Bohnenkamp, D.; Brugger, A.; Alisaac, E.; Wahabzada, M.; Behmann, J.;
Mahlein, A.-K. Benefits of hyperspectral imaging for plant disease detection and plant protection: A
technical perspective. J. Plant Dis. Prot. 2017, 125, 5–20. [CrossRef]
16. Behmann, J.; Steinrücken, J.; Plümer, L. Detection of early plant stress responses in hyperspectral images.
ISPRS J. Photogramm. Remote Sens. 2014, 93, 98–111. [CrossRef]
17. Albetis, J.; Duthoit, S.; Guttler, F.; Jacquin, A.; Goulard, M.; Poilvé, H.; Féret, J.-B.; Dedieu, G. Detection of
Flavescence dorée Grapevine Disease Using Unmanned Aerial Vehicle (UAV) Multispectral Imagery. Remote
Sens. 2017, 9, 308. [CrossRef]
18. Calderón, R.; Navas-Cortés, J.A.; Lucena, C.; Zarco-Tejada, P.J. High-resolution airborne hyperspectral
and thermal imagery for early detection of Verticillium wilt of olive using fluorescence, temperature and
narrow-band spectral indices. Remote Sens. Environ. 2013, 139, 231–245. [CrossRef]
19. Li, X.; Lee, W.S.; Li, M.; Ehsani, R.; Mishra, A.R.; Yang, C.; Mangan, R.L. Spectral difference analysis and
airborne imaging classification for citrus greening infected trees. Comput. Electron. Agric. 2012, 83, 32–46.
[CrossRef]
20. López-López, M.; Calderón, R.; González-Dugo, V.; Zarco-Tejada, P.J.; Fereres, E. Early Detection and
Quantification of Almond Red Leaf Blotch Using High-Resolution Hyperspectral and Thermal Imagery.
Remote Sens. 2016, 8, 276. [CrossRef]
21. MacDonald, S.L.; Staid, M.; Staid, M.; Cooper, M.L. Remote hyperspectral imaging of grapevine
leafroll-associated virus 3 in cabernet sauvignon vineyards. Comput. Electron. Agric. 2016, 130, 109–117.
[CrossRef]
22. Zarco-Tejada, P.J.; Camino, C.; Beck, P.S.A.; Calderon, R.; Hornero, A.; Hernández-Clemente, R.;
Kattenborn, T.; Montes-Borrego, M.; Susca, L.; Morelli, M.; et al. Previsual symptoms of Xylella fastidiosa
infection revealed in spectral plant-trait alterations. Nat. Plants 2018, 4, 432–439. [CrossRef] [PubMed]
23. Huang, W.; Lamb, D.W.; Niu, Z.; Zhang, Y.; Liu, L.; Wang, J. Identification of yellow rust in wheat using
in-situ spectral reflectance measurements and airborne hyperspectral imaging. Precis. Agric. 2007, 8, 187–197.
[CrossRef]
24. Mewes, T.; Franke, J.; Menz, G. Spectral requirements on airborne hyperspectral remote sensing data for
wheat disease detection. Precis. Agric. 2011, 12, 795–812. [CrossRef]
25. Calderón, R.; Montes-Borrego, M.; Landa, B.B.; Navas-Cortés, J.A.; Zarco-Tejada, P.J. Detection of downy
mildew of opium poppy using high-resolution multi-spectral and thermal imagery acquired with an
unmanned aerial vehicle. Precis. Agric. 2014, 15, 639–661. [CrossRef]
26. Sankaran, S.; Maja, J.; Buchanon, S.; Ehsani, R. Huanglongbing (Citrus Greening) Detection Using Visible,
Near Infrared and Thermal Imaging Techniques. Sensors 2013, 13, 2117–2130. [CrossRef] [PubMed]
27. Jansen, M.; Bergsträsser, S.; Schmittgen, S.; Müller-Linow, M.; Rascher, U. Non-Invasive Spectral Phenotyping
Methods can Improve and Accelerate Cercospora Disease Scoring in Sugar Beet Breeding. Agriculture 2014,
4, 147–158. [CrossRef]
28. Polder, G.; van der Heijden, G.W.A.M.; van Doorn, J.; Baltissen, T.A.H.M.C. Automatic detection of tulip
breaking virus (TBV) in tulip fields using machine vision. Biosyst. Eng. 2014, 117, 35–42. [CrossRef]
29. Whetton, R.L.; Hassall, K.L.; Waine, T.W.; Mouazen, A.M. Hyperspectral measurements of yellow rust and
fusarium head blight in cereal crops: Part 2: On-line field measurement. Biosyst. Eng. 2018, 166, 101–115.
[CrossRef]
30. Moshou, D.; Bravo, C.; Oberti, R.; West, J.S.; Ramon, H.; Vougioukas, S.; Bochtis, D. Intelligent multi-sensor
system for the detection and treatment of fungal diseases in arable crops. Biosyst. Eng. 2011, 108, 311–321.
[CrossRef]
31. Schaepman-Strub, G.; Schaepman, M.E.; Painter, T.H.; Dangel, S.; Martonchik, J.V. Reflectance quantities in
optical remote sensing—Definitions and case studies. Remote Sens. Environ. 2006, 103, 27–42. [CrossRef]
32. Herrmann, I.; Vosberg, S.K.; Ravindran, P.; Singh, A.; Chang, H.-X.; Chilvers, M.I.; Conley, S.P.; Townsend, P.A.
Leaf and Canopy Level Detection of Fusarium Virguliforme (Sudden Death Syndrome) in Soybean.
Remote Sens. 2018, 10, 426. [CrossRef]
33. Zhang, J.; Pu, R.; Huang, W.; Yuan, L.; Luo, J.; Wang, J. Using in-situ hyperspectral data for detecting and
discriminating yellow rust disease from nutrient stresses. Field Crop. Res. 2012, 134, 165–174. [CrossRef]
34. Yu, K.; Leufen, G.; Hunsche, M.; Noga, G.; Chen, X.; Bareth, G. Investigation of Leaf Diseases and Estimation
of Chlorophyll Concentration in Seven Barley Varieties Using Fluorescence and Hyperspectral Indices.
Remote Sens. 2013, 6, 64–86. [CrossRef]
35. Yang, C.-M. Assessment of the severity of bacterial leaf blight in rice using canopy hyperspectral reflectance.
Precis. Agric. 2009, 11, 61–81. [CrossRef]
36. Rumpf, T.; Mahlein, A.-K.; Steiner, U.; Oerke, E.-C.; Dehne, H.-W.; Plümer, L. Early detection and classification
of plant diseases with Support Vector Machines based on hyperspectral reflectance. Comput. Electron. Agric.
2010, 74, 91–99. [CrossRef]
37. Cooke, L.R.; Schepers, H.T.A.M.; Hermansen, A.; Bain, R.A.; Bradshaw, N.J.; Ritchie, F.; Shaw, D.S.;
Evenhuis, A.; Kessel, G.J.T.; Wander, J.G.N.; et al. Epidemiology and Integrated Control of Potato Late Blight
in Europe. Potato Res. 2011, 54, 183–222. [CrossRef]
38. Sugiura, R.; Tsuda, S.; Tamiya, S.; Itoh, A.; Nishiwaki, K.; Murakami, N.; Shibuya, Y.; Hirafuji, M.; Nuske, S.
Field phenotyping system for the assessment of potato late blight resistance using RGB imagery from an
unmanned aerial vehicle. Biosyst. Eng. 2016, 148, 1–10. [CrossRef]
39. Duarte-Carvajalino, J.; Alzate, D.; Ramirez, A.; Santa-Sepulveda, J.; Fajardo-Rojas, A.; Soto-Suárez, M.
Evaluating Late Blight Severity in Potato Crops Using Unmanned Aerial Vehicles and Machine Learning
Algorithms. Remote Sens. 2018, 10, 1513. [CrossRef]
40. Nebiker, S.; Lack, N.; Abächerli, M.; Läderach, S. Light-weight multispectral UAV sensors and their
capabilities for predicting grain yield and detecting plant diseases. ISPRS Int. Arch. Photogramm. Remote
Sens. Spat. Inf. Sci. 2016, XLI-B1, 963–970. [CrossRef]
41. Whitehead, K.; Hugenholtz, C.H.; Myshak, S.; Brown, O.; LeClair, A.; Tamminga, A.; Barchyn, T.E.;
Moorman, B.; Eaton, B. Remote sensing of the environment with small unmanned aircraft systems (UASs),
part 2: Scientific and commercial applications. J. Unmanned Veh. Sys. 2014, 2, 86–102. [CrossRef]
42. Franceschini, M.H.D.; Bartholomeus, H.; van Apeldoorn, D.; Suomalainen, J.; Kooistra, L. Intercomparison of
Unmanned Aerial Vehicle and Ground-Based Narrow Band Spectrometers Applied to Crop Trait Monitoring
in Organic Potato Production. Sensors 2017, 17, 1428. [CrossRef]
43. Franceschini, M.H.D.; Bartholomeus, H.; van Apeldoorn, D.; Suomalainen, J.; Kooistra, L. Assessing Changes
in Potato Canopy Caused by Late Blight in Organic Production Systems Through Uav-Based Pushbroom
Imaging Spectrometer. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 109–112. [CrossRef]
44. Römer, C.; Wahabzada, M.; Ballvora, A.; Pinto, F.; Rossini, M.; Panigada, C.; Behmann, J.; Léon, J.;
Thurau, C.; Bauckhage, C.; et al. Early drought stress detection in cereals: Simplex volume maximisation for
hyperspectral image analysis. Funct. Plant Biol. 2012, 39, 878. [CrossRef]
45. European and Mediterranean Plant Protection Organization. Phytophthora infestans on potato. EPPO Bull. 2008, 38, 268–271. [CrossRef]
46. Honkavaara, E.; Saari, H.; Kaivosoja, J.; Pölönen, I.; Hakala, T.; Litkey, P.; Mäkynen, J.; Pesonen, L. Processing
and Assessment of Spectrometric, Stereoscopic Imagery Collected Using a Lightweight UAV Spectral Camera
for Precision Agriculture. Remote Sens. 2013, 5, 5006–5039. [CrossRef]
47. Clevers, J.G.P.W.; Kooistra, L. Using Hyperspectral Remote Sensing Data for Retrieving Canopy Chlorophyll
and Nitrogen Content. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 574–583. [CrossRef]
Remote Sens. 2019, 11, 224 44 of 47
48. Harwin, S.; Lucieer, A.; Osborn, J. The Impact of the Calibration Method on the Accuracy of Point Clouds
Derived Using Unmanned Aerial Vehicle Multi-View Stereopsis. Remote Sens. 2015, 7, 11933–11953.
[CrossRef]
49. Honkavaara, E.; Rosnell, T.; Oliveira, R.; Tommaselli, A. Band registration of tuneable frame format
hyperspectral UAV imagers in complex scenes. ISPRS J. Photogramm. Remote Sens. 2017, 134, 96–109.
[CrossRef]
50. Meier, U. Growth stages of mono- and dicotyledonous plants: BBCH Monograph. OpenAgrar Repository 2018.
[CrossRef]
51. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22,
1330–1334. [CrossRef]
52. Lowekamp, B.C.; Chen, D.T.; Ibanez, L.; Blezek, D. The Design of SimpleITK. Front. Neuroinform. 2013, 7, 45.
[CrossRef] [PubMed]
53. Vakalopoulou, M.; Karantzalos, K. Automatic Descriptor-Based Co-Registration of Frame Hyperspectral
Data. Remote Sens. 2014, 6, 3409–3426. [CrossRef]
54. Mattes, D.; Haynor, D.R.; Vesselle, H.; Lewellen, T.K.; Eubank, W. Nonrigid multimodality image registration.
In Medical Imaging 2001: Image Processing; Proc. SPIE; 2001; pp. 1609–1620.
55. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
[CrossRef]
56. Muja, M.; Lowe, D.G. Fast approximate nearest neighbors with automatic algorithm configuration. VISAPP
2009, 2, 331–340.
57. Tibshirani, R.; Walther, G.; Hastie, T. Estimating the number of clusters in a data set via the gap statistic. J. R.
Stat. Soc. Ser. B (Stat. Methodol.) 2001, 63, 411–423. [CrossRef]
58. Zou, H.; Hastie, T. Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B
(Stat. Methodol.) 2005, 67, 301–320. [CrossRef]
59. Jay, S.; Gorretta, N.; Morel, J.; Maupas, F.; Bendoula, R.; Rabatel, G.; Dutartre, D.; Comar, A.; Baret, F.
Estimating leaf chlorophyll content in sugar beet canopies using millimeter- to centimeter-scale reflectance
imagery. Remote Sens. Environ. 2017, 198, 173–186. [CrossRef]
60. Wood, C.W.; Reeves, D.W.; Duffield, R.R.; Edmisten, K.L. Field chlorophyll measurements for evaluation of
corn nitrogen status. J. Plant Nutr. 1992, 15, 487–500. [CrossRef]
61. Uddling, J.; Gelang-Alfredsson, J.; Piikki, K.; Pleijel, H. Evaluating the relationship between leaf chlorophyll
concentration and SPAD-502 chlorophyll meter readings. Photosynth. Res. 2007, 91, 37–46. [CrossRef]
[PubMed]
62. Clevers, J.G.P.W. Beyond NDVI: Extraction of Biophysical Variables from Remote Sensing Imagery. In Land
Use and Land Cover Mapping in Europe; Manakos, I., Braun, M., Eds.; Springer: Dordrecht, The Netherlands,
2014; Volume 18, pp. 363–381. ISBN 978-94-007-7968-6.
63. Yu, K.; Anderegg, J.; Mikaberidze, A.; Karisto, P.; Mascher, F.; McDonald, B.A.; Walter, A.; Hund, A.
Hyperspectral Canopy Sensing of Wheat Septoria Tritici Blotch Disease. Front. Plant Sci. 2018, 9, 1195.
[CrossRef] [PubMed]
64. Delalieux, S.; Auwerkerken, A.; Verstraeten, W.W.; Somers, B.; Valcke, R.; Lhermitte, S.; Keulemans, J.;
Coppin, P. Hyperspectral Reflectance and Fluorescence Imaging to Detect Scab Induced Stress in Apple
Leaves. Remote Sens. 2009, 1, 858–874. [CrossRef]
65. Gałecki, A.; Burzykowski, T. Linear Mixed-Effects Models Using R; Springer Texts in Statistics; Springer:
New York, NY, USA, 2013; ISBN 978-1-4614-3899-1.
66. Lenth, R. emmeans: Estimated Marginal Means, aka Least-Squares Means; R package version 1.2; 2018.
67. Pinheiro, J.; Bates, D.; DebRoy, S.; Sarkar, D.; R Core Team. nlme: Linear and Nonlinear Mixed Effects Models;
R Core Team: Vienna, Austria, 2016.
68. Thurau, C.; Kersting, K.; Bauckhage, C. Yes We Can—Simplex Volume Maximization for Descriptive
Web-Scale Matrix Factorization. In Proceedings of the Conference on Information and Knowledge
Management (CIKM), Toronto, ON, Canada, 26–30 October 2010.
69. Thurau, C.; Kersting, K.; Wahabzada, M.; Bauckhage, C. Descriptive matrix factorization for sustainability:
Adopting the principle of opposites. Data Min. Knowl. Discov. 2012, 24, 325–354. [CrossRef]
70. Kersting, K.; Bauckhage, C.; Wahabzada, M.; Mahlein, A.-K.; Steiner, U.; Oerke, E.-C.; Römer, C.; Plümer, L.
Feeding the World with Big Data: Uncovering Spectral Characteristics and Dynamics of Stressed Plants.
In Computational Sustainability; Lässig, J., Kersting, K., Morik, K., Eds.; Springer International Publishing:
Cham, Switzerland, 2016; Volume 645, pp. 99–120. ISBN 978-3-319-31856-1.
71. Alfeld, M.; Wahabzada, M.; Bauckhage, C.; Kersting, K.; van der Snickt, G.; Noble, P.; Janssens, K.;
Wellenreuther, G.; Falkenberg, G. Simplex Volume Maximization (SiVM): A matrix factorization algorithm
with non-negative constrains and low computing demands for the interpretation of full spectral X-ray
fluorescence imaging data. Microchem. J. 2017, 132, 179–184. [CrossRef]
72. Ghamisi, P.; Plaza, J.; Chen, Y.; Li, J.; Plaza, A.J. Advanced Spectral Classifiers for Hyperspectral Images:
A review. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–32. [CrossRef]
73. Kruse, F.A.; Lefkoff, A.B.; Boardman, J.W.; Heidebrecht, K.B.; Shapiro, A.T.; Barloon, P.J.; Goetz, A.F.H.
The spectral image processing system (SIPS)—Interactive visualization and analysis of imaging spectrometer
data. Remote Sens. Environ. 1993, 44, 145–163. [CrossRef]
74. Barnes, R.J.; Dhanoa, M.S.; Lister, S.J. Standard Normal Variate Transformation and De-trending of
Near-Infrared Diffuse Reflectance Spectra. Appl. Spectrosc. 1989, 43, 772–777. [CrossRef]
75. Mohd Asaari, M.S.; Mishra, P.; Mertens, S.; Dhondt, S.; Inzé, D.; Wuyts, N.; Scheunders, P. Close-range
hyperspectral image analysis for the early detection of stress responses in individual plants in a
high-throughput phenotyping platform. ISPRS J. Photogramm. Remote Sens. 2018, 138, 121–138. [CrossRef]
76. Naidu, R.A.; Perry, E.M.; Pierce, F.J.; Mekuria, T. The potential of spectral reflectance technique for the
detection of Grapevine leafroll-associated virus-3 in two red-berried wine grape cultivars. Comput. Electron.
Agric. 2009, 66, 38–45. [CrossRef]
77. Jacquemoud, S.; Verhoef, W.; Baret, F.; Bacour, C.; Zarco-Tejada, P.J.; Asner, G.P.; François, C.; Ustin, S.L.
PROSPECT+SAIL models: A review of use for vegetation characterization. Remote Sens. Environ. 2009, 113,
S56–S66. [CrossRef]
78. Mahlein, A.-K.; Steiner, U.; Hillnhütter, C.; Dehne, H.-W.; Oerke, E.-C. Hyperspectral imaging for small-scale
analysis of symptoms caused by different sugar beet diseases. Plant Methods 2012, 8, 3. [CrossRef] [PubMed]
79. Kong, W.; Huang, W.; Zhou, X.; Ye, H.; Dong, Y.; Casa, R. Off-Nadir Hyperspectral Sensing for Estimation
of Vertical Profile of Leaf Chlorophyll Content within Wheat Canopies. Sensors 2017, 17, 2711. [CrossRef]
[PubMed]
80. Gitelson, A.A.; Keydan, G.P.; Merzlyak, M.N. Three-band model for noninvasive estimation of chlorophyll,
carotenoids, and anthocyanin contents in higher plant leaves. Geophys. Res. Lett. 2006, 33, L11402. [CrossRef]
81. Gitelson, A.A.; Merzlyak, M.N.; Chivkunova, O.B. Optical Properties and Nondestructive Estimation of
Anthocyanin Content in Plant Leaves. Photochem. Photobiol. 2001, 74, 38–45. [CrossRef]
82. Gitelson, A.A.; Gritz, Y.; Merzlyak, M.N. Relationships between leaf chlorophyll content and spectral
reflectance and algorithms for non-destructive chlorophyll assessment in higher plant leaves. J. Plant Physiol.
2003, 160, 271–282. [CrossRef]
83. Vincini, M.; Frazzi, E.; D’Alessio, P. A broad-band leaf chlorophyll vegetation index at the canopy scale.
Precis. Agric. 2008, 9, 303–319. [CrossRef]
84. Jordan, C.F. Derivation of Leaf-Area Index from Quality of Light on the Forest Floor. Ecology 1969, 50,
663–666. [CrossRef]
85. Le Maire, G.; François, C.; Dufrêne, E. Towards universal broad leaf chlorophyll indices using PROSPECT
simulated database and hyperspectral reflectance measurements. Remote Sens. Environ. 2004, 89, 1–28.
[CrossRef]
86. Zarco-Tejada, P.J.; Berjón, A.; López-Lozano, R.; Miller, J.R.; Martín, P.; Cachorro, V.; González, M.R.;
de Frutos, A. Assessing vineyard condition with hyperspectral indices: Leaf and canopy reflectance
simulation in a row-structured discontinuous canopy. Remote Sens. Environ. 2005, 99, 271–287. [CrossRef]
87. Gitelson, A.A.; Kaufman, Y.J.; Merzlyak, M.N. Use of a green channel in remote sensing of global vegetation
from EOS-MODIS. Remote Sens. Environ. 1996, 58, 289–298. [CrossRef]
88. Gandia, S.; Fernández, G.; Garcia, J.C.; Moreno, J. Retrieval of vegetation biophysical variables from
CHRIS/PROBA data in the SPARC campaign. In Proceedings of the 2nd CHRIS/Proba Workshop, Frascati,
Italy, 28–30 April 2004; Volume 578, pp. 40–48.
89. Lichtenthaler, H.K.; Lang, M.; Sowinska, M.; Heisel, F.; Miehé, J.A. Detection of Vegetation Stress Via a New
High Resolution Fluorescence Imaging System. J. Plant Physiol. 1996, 148, 599–612. [CrossRef]
90. Daughtry, C.S.T.; Walthall, C.L.; Kim, M.S.; De Colstoun, E.B.; McMurtrey, J.E. Estimating corn leaf
chlorophyll concentration from leaf and canopy reflectance. Remote Sens. Environ. 2000, 74, 229–239.
[CrossRef]
91. Wu, C.; Niu, Z.; Tang, Q.; Huang, W. Estimating chlorophyll content from hyperspectral vegetation indices:
Modeling and validation. Agric. For. Meteorol. 2008, 148, 1230–1241. [CrossRef]
92. Haboudane, D. Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop
canopies: Modeling and validation in the context of precision agriculture. Remote Sens. Environ. 2004, 90,
337–352. [CrossRef]
93. Maccioni, A.; Agati, G.; Mazzinghi, P. New vegetation indices for remote measurement of chlorophylls based
on leaf directional reflectance spectra. J. Photochem. Photobiol. B Biol. 2001, 61, 52–61. [CrossRef]
94. Chen, J.M. Evaluation of Vegetation Indices and a Modified Simple Ratio for Boreal Applications. Can. J.
Remote Sens. 1996, 22, 229–242. [CrossRef]
95. Dash, J.; Curran, P.J. The MERIS terrestrial chlorophyll index. Int. J. Remote Sens. 2004, 25, 5403–5413.
[CrossRef]
96. Barnes, E.M.; Clarke, T.R.; Richards, S.E.; Colaizzi, P.D.; Haberland, J.; Kostrzewski, M.; Waller, P.; Choi, C.;
Riley, E.; Thompson, T. Coincident detection of crop water stress, nitrogen status and canopy density
using ground based multispectral data. In Proceedings of the Fifth International Conference on Precision
Agriculture, Bloomington, MN, USA, 16–19 July 2000; Volume 1619.
97. Rouse, J.; Haas, R.H.; Schell, J.A.; Deering, D.W. Monitoring Vegetation Systems in the Great Plains with
ERTS. In Proceedings of the NASA Goddard Space Flight Center Third ERTS-1 Symposium, Washington,
DC, USA, 10–14 December 1973; NASA: Greenbelt, MD, USA, 1974.
98. Gitelson, A.; Merzlyak, M.N. Quantitative estimation of chlorophyll-a using reflectance spectra: Experiments
with autumn chestnut and maple leaves. J. Photochem. Photobiol. B Biol. 1994, 22, 247–252. [CrossRef]
99. Gong, P.; Pu, R.; Biging, G.S.; Larrieu, M.R. Estimation of forest leaf area index using vegetation indices
derived from hyperion hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1355–1362. [CrossRef]
100. Rondeaux, G.; Steven, M.; Baret, F. Optimization of soil-adjusted vegetation indices. Remote Sens. Environ.
1996, 55, 95–107. [CrossRef]
101. Gamon, J.A. The photochemical reflectance index: An optical indicator of photosynthetic radiation use
efficiency across species, functional types, and nutrient levels. Oecologia 1997, 112, 492–501. [CrossRef]
[PubMed]
102. Blackburn, G.A. Quantifying Chlorophylls and Caroteniods at Leaf and Canopy Scales: An Evaluation of
Some Hyperspectral Approaches. Remote Sens. Environ. 1998, 66, 273–285. [CrossRef]
103. Merzlyak, M.N.; Gitelson, A.A.; Chivkunova, O.B.; Rakitin, V.Y. Non-destructive optical detection of pigment
changes during leaf senescence and fruit ripening. Physiol. Plant. 1999, 106, 135–141. [CrossRef]
104. Chappelle, E.W.; Kim, M.S.; McMurtrey, J.E. Ratio analysis of reflectance spectra (RARS): An algorithm for
the remote estimation of the concentrations of chlorophyll A, chlorophyll B, and carotenoids in soybean
leaves. Remote Sens. Environ. 1992, 39, 239–247. [CrossRef]
105. Roujean, J.-L.; Breon, F.-M. Estimating PAR absorbed by vegetation from bidirectional reflectance
measurements. Remote Sens. Environ. 1995, 51, 375–384. [CrossRef]
106. Guyot, G.; Baret, F. Utilisation de la haute résolution spectrale pour suivre l'état des couverts végétaux
[Use of high spectral resolution to monitor the state of vegetation canopies]. In Proceedings of the 4th
International Colloquium on Spectral Signatures of Objects in Remote Sensing, Aussois, France,
18–22 January 1988; ESA: Aussois, France, 1988; Volume 287, pp. 279–286.
107. Peñuelas, J.; Baret, F.; Filella, I. Semi-empirical indices to assess carotenoids/chlorophyll a ratio from leaf
spectral reflectance. Photosynthetica 1995, 31, 221–230.
108. Gitelson, A.A.; Merzlyak, M.N. Remote estimation of chlorophyll content in higher plant leaves. Int. J.
Remote Sens. 1997, 18, 2691–2697. [CrossRef]
109. McMurtrey, J.E.; Chappelle, E.W.; Kim, M.S.; Meisinger, J.J.; Corp, L.A. Distinguishing nitrogen fertilization
levels in field corn (Zea mays L.) with actively induced fluorescence and passive reflectance measurements.
Remote Sens. Environ. 1994, 47, 36–44. [CrossRef]
110. Zarco-Tejada, P.J.; Pushnik, J.C.; Dobrowski, S.; Ustin, S.L. Steady-state chlorophyll a fluorescence detection
from canopy derivative reflectance and double-peak red-edge effects. Remote Sens. Environ. 2003, 84, 283–294.
[CrossRef]
111. Haboudane, D.; Miller, J.R.; Tremblay, N.; Zarco-Tejada, P.J.; Dextraze, L. Integrated narrow-band vegetation
indices for prediction of crop chlorophyll content for application to precision agriculture. Remote Sens.
Environ. 2002, 81, 416–426. [CrossRef]
112. Haboudane, D.; Tremblay, N.; Miller, J.R.; Vigneault, P. Remote Estimation of Crop Chlorophyll Content
Using Spectral Indices Derived from Hyperspectral Data. IEEE Trans. Geosci. Remote Sens. 2008, 46, 423–437.
[CrossRef]
113. Broge, N.H.; Leblanc, E. Comparing prediction power and stability of broadband and hyperspectral
vegetation indices for estimation of green leaf area index and canopy chlorophyll density. Remote Sens.
Environ. 2001, 76, 156–172. [CrossRef]
114. Clevers, J. The derivation of a simplified reflectance model for the estimation of leaf area index. Remote Sens.
Environ. 1988, 25, 53–69. [CrossRef]
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).