
Fast and Reliable Obstacle Detection and Segmentation for Cross-country Navigation

A. Talukder, R. Manduchi*, A. Rankin, L. Matthies


Jet Propulsion Laboratory, California Institute of Technology
Pasadena, CA 91109. Tel. (818)354-1000 – Fax (818)393-3302
[ashit.talukder,art.rankin,larry.matthies]@jpl.nasa.gov

*University of California at Santa Cruz


Santa Cruz, CA 95064. Tel. (831)459-1479 – Fax (818)459-4829
[email protected]

Abstract

Obstacle detection (OD) is one of the main components of the control system of autonomous vehicles. In the case of indoor/urban navigation, obstacles are typically defined as surface points that are higher than the ground plane. This characterisation, however, cannot be used in cross-country and unstructured environments, where the notion of "ground plane" is often not meaningful. A previous OD technique for cross-country navigation (adopted by the DEMO III experimental unmanned vehicle) computes obstacles by analysing the columns of a range image independently, looking for steps or slopes along the range profile. This procedure, however, is prone to missing obstacles whose surface normals point away from the line of sight. We introduce a fast, fully 3-D OD technique that overcomes this problem, reducing the risk of false negatives while keeping the same rate of false positives. A simple addition to our algorithm allows one to segment obstacle points into clusters, where each cluster identifies an isolated obstacle in 3-D space. Obstacle segmentation corresponds to finding the connected components of a suitable graph, an operation that can be performed at minimal additional cost during the computation of obstacle points. Rule-based classification using 3-D geometrical measures derived for each segmented obstacle is then used to reject false obstacles (for example, objects that are small in volume or of low height). Results for a number of scenes of natural terrain are presented and compared with a pre-existing obstacle detection algorithm.

Keywords: Autonomous navigation, obstacle detection, terrain perception, 3-D vision, classification, geometrical reasoning

1. Introduction

Path planning for autonomous vehicles requires that a map of all visible obstacles be produced in real time using the available sensing information. The obstacle-free candidate paths leading toward the desired position are then compared in terms of their hindrance (measured, for example, by the amount of steering involved [Lacaze98]).

For navigation indoors or in structured environments (roads), obstacles are simply defined as surface elements that are higher than the ground plane. Thus, assuming that elevation information is available (by means of stereo cameras or ladars), the main task of obstacle detection (OD) algorithms for indoor/urban environments is to estimate the ground plane in front of the vehicle. Many papers in the literature deal with this problem (see for example [Zhang94], [Williamson98], [Broggi00]). This flat-world assumption is clearly not valid when driving in off-road, cross-country environments. In such cases, the geometry of the terrain in front of the vehicle can hardly be modelled as a planar surface. Figure 1 shows examples of natural scenes where no distinct planar surface can be fit as a ground surface, due to an inadequate number of visible ground points.

Figure 1: Examples of natural terrain

In principle, one could determine the traversability of a given path by simulating the placement of a 3-D vehicle model over the computed elevation map, and verifying that all wheels touch the ground while the bottom of the vehicle stays clear. This procedure, however, besides being computationally heavy, requires the availability of a high-resolution elevation map. Maps are estimated starting from range images (from stereo or ladars). Backprojecting image pixels onto the 3-D world generates a non-uniform point set. Therefore, either the elevation map is represented by a multiresolution structure (which makes the OD task cumbersome), or it is interpolated onto an intermediate-density uniform grid, which may imply a loss of resolution in some regions of the map.
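The backprojection step mentioned above can be made concrete with a short sketch. This is a minimal illustration under an assumed pinhole camera model (focal length f and principal point (cx, cy), in pixel units); none of these values come from the paper.

```python
import numpy as np

def backproject(depth, f, cx, cy):
    """Backproject a depth image (meters) into 3-D points with a
    pinhole model: x = (u - cx) * z / f, y = (v - cy) * z / f."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    # Stack into an (h*w, 3) point set. Points backprojected from
    # distant pixels are widely spaced, which is the source of the
    # non-uniform point density noted in the text.
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a 2x2 depth image, focal length 1, centered principal point.
pts = backproject(np.ones((2, 2)), f=1.0, cx=0.5, cy=0.5)
```

Resampling such a point set onto a uniform elevation grid is what introduces the interpolation loss discussed above.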

Conversely, working directly in the range image domain (pixel-based approach) presents two advantages: first, it is much faster than dealing with elevation maps; second, it uses range data at the highest resolution available (since the data does not need to be interpolated into fixed-size 3-D cells). Admittedly, the elevation map approach allows one to easily integrate range information as the vehicle moves along and collects more sensing information. This functionality is certainly important for robust path planning, especially when the scene has many visual occlusions, meaning that a single view may not convey enough information. Yet, we argue that for a vehicle moving forward, it is the most recently acquired range image that typically contains the highest-resolution range information. Hence, computing obstacles based on the most recent range image makes sense on the grounds of both computational efficiency and detection accuracy.

Matthies et al. developed a fast pixel-based algorithm to detect obstacles in cross-country terrain [Matthies94], [Matthies96], [Matthies98]. Their technique (adopted by the DEMO III eXperimental Unmanned Vehicle (XUV) [Bellutta00]) measures the slope and height of surface patches (where "slope" is measured by the angle formed by the surface normal and the vertical axis). Figure 2 shows an example of a 1-D range profile, where slant (θ) and height (H) are shown for two different surface patches. Obstacles correspond to ramps with a slope above a certain threshold and spanning a minimum height. The rationale behind this approach is simple: if a surface patch has limited slope, we may assume that it belongs to the ground surface (for example, as part of a path leading to a plateau), and therefore it is traversable. If a patch is steep but of small size, it corresponds to an obstacle short enough to be negotiable by the vehicle. Thus, the lower patch in Figure 2 would probably be considered traversable, while the higher patch would probably be considered an obstacle.

Figure 2: 1-D range profile and obstacle definition

In fact, the OD technique of [Matthies96] looks exclusively at 1-D range profiles such as in Figure 2, because it analyses each column of the range image separately from the others. This choice, which makes the algorithm very fast, has drawbacks in terms of detection accuracy. It is easy to see that any 1-D range profile corresponding to one column of the range image is equal to the trace left by the visible surface on a slicing plane Π defined by the column points in the image plane and the focal point of the camera. The estimated slope of this 1-D range profile is not equal, in general, to the true slope of the visible surface, and can actually be much smaller. Thus, an obstacle may be missed by such a technique if the obstacle's surface normal points away from the slicing plane Π.

In this paper, we present an improved version of the column-wise OD algorithm [Matthies96], which computes 3-D slopes and yet retains most of the simplicity and computational efficiency of the original approach. More precisely, this work has three main contributions. First, we provide a simple but rigorous definition of obstacle points that makes sense for cross-country environments, formalizing and extending the intuitive notion in [Matthies96]. Second, we derive an efficient algorithm to compute the obstacle points in a range image. Third, we present a technique to correctly segment such obstacle points, so that isolated obstacles are identified and labeled. We show that obstacle segmentation (OS) corresponds to finding connected components in a suitable graph built by the OD procedure. Our OS procedure makes full use of 3-D information, and is implemented efficiently in terms of computation and memory.

The paper is organized as follows. The OD algorithm is detailed in Section 2, followed by a discussion of our obstacle segmentation (OS) algorithm in Section 3. In Section 4, we discuss some of the parameters in the OD and OS algorithms, and in Section 5, we detail our 3-D geometrical obstacle reasoning and classification method, followed by results of our algorithms and a comparison with a pre-existing OD method in Section 6.

2. Obstacle definition and algorithms for OD

In this section we give an axiomatic definition of "obstacle" which is amenable to cross-country navigation, and derive a simple and efficient algorithm for obstacle detection (OD). We will show in Section 3 that a simple extension of this algorithm allows us not only to detect obstacle points in an image, but also to identify regions of points belonging to the same obstacle.

In order to introduce our algorithm, we first provide an axiomatic definition of the "obstacles" we want to detect. We will define obstacles in terms of two distinct points in space:

Definition 1: Two surface points p1=(p1x,p1y,p1z) and p2=(p2x,p2y,p2z) belong to the same obstacle (and will be called "compatible" with each other) if they satisfy the following two conditions:
1. HT < |p2y−p1y| < Hmax (i.e., their difference in height is larger than HT but smaller than Hmax);
2. |p2y−p1y| / ||p2−p1|| > cos(θT) (i.e., the line joining them forms an angle with the vertical axis smaller than θT);
where HT, Hmax and θT are suitably chosen constants.

In our definition, HT is the minimum height of an object to be considered an obstacle; Hmax is a parameter controlling the size of the analysis window in the OD algorithms, and will be discussed in Section 3; θT is the smallest value of the slope of the steepest point of an obstacle.

Thus, a point p is classified as an "obstacle" if there exists at least one other visible surface point which is compatible with p. Definition 1, however, specifies more than just that: it also formalizes the notion of points belonging to the same obstacle. This is rather useful if, beyond determining obstacle points, one wishes to segment the different obstacles visible in an image, as discussed in Section 3.

Figure 3 shows an illustration of the detection of obstacle points (blue) in 3-D space based on slope and height measures relative to ground points (brown).

Figure 3: 3-D obstacle search method using a double cone that locates ground pixels (brown) and obstacle pixels (blue).

A naïve strategy for detecting all the obstacle points in an image would thus examine all point pairs, resulting in N² tests. Note that testing whether two points are compatible requires 5 sums, 4 multiplications and 3 comparisons, all on floating-point numbers. A more efficient algorithm can be designed starting from the following observation. According to Definition 1, p is an obstacle point if and only if there exists at least one visible surface point located inside the double truncated cone of Figure 3. Searching for such points in 3-D space, however, is an expensive operation. Instead, we observe that the double truncated cone centered in p projects into a double truncated triangle in the image plane centered in pixel p (the projection of p on the image plane¹). Each such triangle has height equal to Hmax·f/pz, where f is the camera's focal length. Note, however, that an image point in such triangles is not necessarily generated by a 3-D point within the cone. Thus, a strategy for detecting all obstacle points in the image is the following:

Obstacle Detection (OD) Algorithm 1.
• For each pixel p, determine the set Ip of pixels belonging to the double truncated triangle centered in p. Define a scanning order for the points in Ip.
• Scan the points in Ip until a pixel pi compatible with p is found, in which case classify p as an obstacle point.
• If no such pixel is found, p is not an obstacle point.

We have thus reduced the complexity of the algorithm from quadratic to linear in N. Note, however, that a pair of points (p1,p2) may be tested twice. If α is the proportion of obstacle points in the image, and K is the average number of points in each projected triangle on the image plane, then this algorithm requires an expected number of 2N(α+(1−α)K) tests.

Let us now introduce a second strategy, which does not require duplicate tests:

OD Algorithm 2.
• Initialization: classify all pixels as non-obstacle.
• Scan the pixels from bottom to top and from left to right; for each pixel p:
  • Determine the set Up of pixels belonging to only the upper truncated triangle with lower vertex in p (see Figure 4).
  • Examine all points in Up, and determine the set Sp of points pi∈Up compatible with p.
  • If Sp is not empty, classify all points of Sp and p as obstacle points.

It is easy to see that each pixel is tested just once against all the other points in the upper and lower truncated triangles of OD 1. With reference to the quantities introduced earlier, now NK tests must be performed over the image. Thus, if α<0.5, the second algorithm results in higher computational efficiency. More importantly, a simple modification of this algorithm allows one to easily segment obstacles in the image, as described in the next section.

Figure 4 shows an illustration of our OD 2 algorithm, where the search area in the image depends on the distance of the point from the image plane.

Figure 4: Implementation of OD Algorithm 2 on 2-D image data using triangular projections.

¹ As there is a one-to-one correspondence between 3-D points p in the range image and their projections p in the image plane, we will say that two 2-D points p1 and p2 are compatible, meaning that their corresponding 3-D points are.
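As a concrete illustration, the sketch below implements the Definition 1 compatibility test and the single-pass scan of OD Algorithm 2. It is a simplified stand-in, not the paper's implementation: the projected truncated triangle Up is approximated by a fixed-size rectangular window above each pixel, and the threshold values are illustrative assumptions.

```python
import numpy as np

H_T, H_MAX = 0.3, 2.0                 # illustrative height thresholds (m)
COS_T = np.cos(np.deg2rad(40))        # illustrative slope threshold theta_T

def compatible(p1, p2):
    """Definition 1: height difference within (H_T, H_MAX) and the
    joining line within theta_T of the vertical (y) axis."""
    dy = abs(p2[1] - p1[1])
    if not (H_T < dy < H_MAX):
        return False
    return dy > COS_T * np.linalg.norm(p2 - p1)   # avoids a division

def detect_obstacles(points, win=3):
    """OD Algorithm 2 with a fixed-size upper window standing in for
    the projected triangle U_p. points: (rows, cols, 3) array of 3-D
    coordinates; row 0 is taken as the bottom of the image."""
    rows, cols, _ = points.shape
    obstacle = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):                 # scan bottom to top
        for c in range(cols):             # and left to right
            p = points[r, c]
            # Examine the window above p (the stand-in for U_p) and
            # mark p together with every compatible point found (S_p).
            for rr in range(r + 1, min(r + 1 + win, rows)):
                for cc in range(max(0, c - win), min(c + win + 1, cols)):
                    if compatible(p, points[rr, cc]):
                        obstacle[r, c] = True
                        obstacle[rr, cc] = True
    return obstacle
```

On a flat patch no point is marked, while a vertical step taller than HT marks the points on both sides of the step; the actual algorithm replaces the fixed window with the distance-dependent triangle of Figure 4.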
3. Obstacle segmentation

Definition 1 specifies only a sufficient condition for two points to belong to the same obstacle, not a necessary one. Two points may well belong to the same obstacle without being compatible (for example, if the two points are very close to each other). In fact, the missing "only if" part is implicitly defined by the following transitivity property: if p1 and p2 belong to the same obstacle, and p2 and p3 belong to the same obstacle, then p1, p2 and p3 belong to the same obstacle. We maintain that two points p1 and pM belong to the same obstacle if and only if there exists a chain of point pairs (p1,p2),(p2,p3),…,(pM−1,pM) such that all pairs (pj,pj+1) are compatible. We can represent the set of points as the nodes of an undirected graph (the points graph); two nodes in the graph are linked if they satisfy the conditions of Definition 1. Thus, two points p1 and p2 belong to the same obstacle if and only if there exists a path in the graph from p1 to p2. We can extend this notion to define a single obstacle as a maximal connected subgraph (i.e., a connected component) of the points graph.

Classical depth-first or breadth-first search algorithms [Mehlhorn84] can find the connected components of the points graph with complexity linear in (N+M), where M is the number of edges in the graph. Note, however, that our OD technique does not yield the explicit graph representation required by classical connected component algorithms. In the following, we discuss some possible procedures for computing the connected components of the points graph as it is being built in the loop of OD Algorithm 2.

The first proposed algorithm is based on pixel re-coloring:

Obstacle Segmentation (OS) Algorithm 1.
Modify the initialisation line of OD Algorithm 2 as follows:
• Initialisation: Classify all image points as non-obstacle; no image point is labelled; initialise the label set to the void set.
The following instructions are added to the loop on the points p in OD Algorithm 2:
• If no point in {p,Sp} was already labelled, create a new label and color all points in Sp and p with it.
• Else, if just one point in {p,Sp} was already colored, color p and all points in Sp with that label.
• Else, there is a label conflict: two or more distinct labels {l1,…,lL} are used for the same connected component. Choose any such label (say, l1), find the set of pixels already colored with any label in {l2,…,lL}, and change the label of those pixels to l1.

OS Algorithm 1 always keeps the number of existing labels small, so that the likelihood of label conflicts is minimized. However, pixel re-coloring is an expensive operation. Of course, one may use a hashing table, but that requires a significant amount of additional memory. The second proposed algorithm introduces an auxiliary labels graph, whose nodes correspond to the labels used to color the nodes of the points graph.

OS Algorithm 2.
Modify the initialization line of OD Algorithm 2 as follows:
• Initialization: Classify all image points as non-obstacle; no image point is labelled; initialize the labels graph to the void set.
The following instructions are added to the loop on the points p in OD Algorithm 2:
• If no point in {p,Sp} was already labelled, create a new node in the labels graph and color all points in Sp and p with the corresponding label.
• Else, if just one point in {p,Sp} was already colored, color p and all points in Sp with that label.
• Else, there is a label conflict: two or more distinct labels {l1,…,lL} are used for the same connected component. Color all unlabelled points in {p,Sp} using any one of these labels (say, l1), and add edges in the labels graph linking the nodes corresponding to these labels. Re-color all pixels in {p,Sp} to label l1.

Figure 5: Labelling process during 3-D obstacle detection: (a) obstacles for one ground pixel; (b) for a second ground pixel; (c) merging of overlapping obstacles; and (d) a new obstacle label.

When the procedure terminates, all nodes in the points graph are labelled; the nodes belonging to any connected component of the labels graph represent the set of labels coloring the nodes of one connected component of the points graph. Thus, in order to identify all the obstacles in the scene, one has to compute the connected components of the labels graph. This operation takes a negligible amount of time if there are many fewer labels than points in the image. Note that the operation of pixel re-coloring within {p,Sp} in case of label conflict is not strictly necessary, but it helps reduce the likelihood of label conflicts. In fact, we noticed that only a minimal amount of over-segmentation is introduced if one neglects to compute the connected components of the labels graph (i.e., if each node of the labels graph is taken to correspond to one obstacle).

Figure 5 shows the process of segmentation/labeling of obstacles that occurs implicitly in our 3-D obstacle detector. Figures 6 and 7 show a synthetic and a real example where disparate objects that touch in 2-D image space, and would therefore be assigned to one obstacle by a 2-D blob coloring procedure [Bellutta00], are labelled with different colors by our OS algorithm.

Figure 6: Obstacle labelling in our 3-D obstacle detector, where obstacle points that are adjacent in 2-D image space (green, blue pixels) but distant in 3-D space are assigned unique labels.

Figure 7: Obstacle labelling and segmentation

Having formalized the notion of points belonging to the same obstacle, the role of the parameter Hmax should now be clear. Hmax enforces separation of two obstacles in those cases where pairs of points exist, one for each obstacle, satisfying the slope condition but located far apart. Typically, such situations arise from missing range measurements (due, for example, to poor stereo matching quality). Rather than linking two obstacles when there is not enough range information, the first condition in Definition 1 keeps such obstacles separated. Note in passing that larger values of Hmax imply larger triangles in Figure 4, and therefore higher computational complexity. On the other side, too small a value for Hmax could cause many obstacle points to be missed. An interesting case is represented by obstacles which are not connected to the ground (e.g., concertina wire). For the wire to be detected as an obstacle, Hmax should be at least as large as the height of the wire with respect to the ground.

4. Spatial Resolution of 3-D obstacle detector

In order to evaluate the efficacy of our 3-D obstacle detector, it is important to understand the spatial resolution limitations of our algorithm. The spatial resolution determines how close two obstacles can be and still be segmented as two different objects. This information is critical when the terrain is densely occupied by non-traversable obstacles, in which case the gaps between obstacles must be accurately located to allow autonomous navigation and effective movement of the vehicle in such densely occupied landscapes.

If the search height of the cone in 3-D space is Hmax, as specified earlier, our algorithm assigns the same obstacle label to any two occupied pixels separated by a horizontal distance of up to 2Hmax/cos(θT). Therefore, two different obstacles that are at least Hmax in height and separated by a horizontal distance of more than 2Hmax/cos(θT) are assigned different labels by our 3-D obstacle detector. This implies that the spatial resolution for obstacles that are at least Hmax in height is 2Hmax/cos(θT). For obstacles of height H less than Hmax, the spatial resolution of our algorithm is much better: it is 2H/cos(θT).

If the width W of the vehicle is less than the spatial resolving power of the algorithm, then our algorithm will safely and accurately locate all visible, traversable paths in the terrain. This is generally true for autonomous cross-country vehicles, such as the HMMWV or the URBIE robot testbeds used at JPL. In practice, the range estimates from stereo or laser-range sensors are corrupted by noise and susceptible to measurement/estimation errors, especially for points far from the sensor. This is further magnified by errors due to incorrect sensor calibration. Therefore, the spatial resolution of our 3-D obstacle detector is typically worse than the theoretical limits discussed here.

4.1. Parameter selection in 3-D obstacle detector

As discussed in Section 2, our 3-D obstacle detector involves searching a cone region around each point in 3-D space for the presence of an obstacle. If the ground terrain is flat (horizontal), the obstacle search at a point (x0,y0,z0) at distance z0 from the image plane involves searching an area corresponding to the projection of the cone onto the 2-D image, which is an inverted triangle of height Hi = HT·f/z0 (whose vertex is (x0i,y0i) in the image I, with x0i = x0·f/z0 and y0i = y0·f/z0) and with vertex angle 90°−θT, as shown in Figure 4 earlier. This is the region Up in OD 2.

However, in reality the terrain is uneven, and many ground pixels in the terrain do not lie on the camera plane. Additionally, the camera plane may not be horizontal if the vehicle is on a slope. Therefore, the projection of the cone will change with terrain elevation variations.

We analyse the change in the projection of the 3-D cone along each spatial dimension x,y as the terrain configuration changes. When the terrain is flat, the projection (region Up in OD 2) is a triangle with a horizontal base, of height HT·f/z0. If the terrain is sloped, or the camera plane tilted, it is possible that the cone generatrix (the slanted line at angle θT) becomes parallel to the camera plane. In such a case, the region Up is a slanted triangle whose base is at angle θT with the horizontal. The hypotenuse of the triangle is parallel to the vertical axis, of length (HT²/cos²(θT) + HT²)^½·f/z0; this is larger than HT·f/z0, the projection of the cone for a ground plane that is horizontal and at the same elevation as the image plane.

Similarly, if the camera/vehicle, or a ground plane segment, were tilted about the x-axis, the hypotenuse of the projected triangle could be parallel to the x-axis, thereby yielding a projection of size (HT²cos²(θT) + HT²)^½·f/z0 along the x-axis.

Therefore, the shape and size of the search region Up vary with terrain orientation. In practice, we use a square search window of size at least (HT²cos²(θT) + HT²)^½·f/z0 for our 3-D obstacle detection to accommodate all possible terrain variations. Typically, a window of about 4-5 times the minimum size is employed to ensure that valid obstacles are not discarded due to the spatial resolution limitations of the estimated range-from-stereo data.

5. 3-D Shape Reasoning for Obstacle Classification using a Rule-based Classifier

As discussed earlier, our 3-D obstacle detection algorithm automatically segments the obstacles to yield a unique label for each obstacle in 3-D space. This facilitates the use of 3-D shape and geometrical measures to effectively reject spurious false obstacles that may have been detected.

3-D shape reasoning techniques have been used in robotics in the past. Sutton et al. [Sutton98] used 3-D shape reasoning in robotics by building detailed 3-D models of an object from range data, followed by shape reasoning to label the object's potential functionality. A model-based 3-D geometrical reasoning scheme for land vehicles has been used [Marti96], where prior scene knowledge, in the form of a generic 3-D model of the expected scene and the potential objects, is compared with the actual scene to perform 3-D obstacle classification. Both these techniques require detailed prior knowledge of the objects and the scene, which is not available in real terrain navigation scenarios, as well as 3-D model matching, which is computationally demanding. In [Crisman98], stereo data is used to locate edges and corner targets for wheelchair navigation in relatively uncluttered, flat urban environments. In another approach [Hoover98], a planar boundary representation (the space envelope) models the empty, unoccupied volumes in the scene; reasoning about the scene's content using surface geometry and topology is used to determine the number of visible objects. All these methods require the creation of a 3-D model, possibly by converting the 3-D point data into a mesh representation, which is a complex operation and often not suited for real-time applications with limited computational resources. We instead compute 3-D geometrical features from the raw point-cloud data, which enables real-time analysis.

In our initial research efforts, we extracted five simple 3-D geometrical measures from each obstacle: the perimeter of the 3-D bounding box of the obstacle, the average slope of the obstacle, the average relative height from the surrounding background, the maximum slope, and the maximum relative height of the obstacle from the surrounding regions. These geometrical measures are automatically derived during the obstacle segmentation process, without any extra computational overhead. A threshold is assigned to each of these five 3-D measures. If any of the five variables has a value less than its preselected threshold, the obstacle is rejected as a false obstacle. For example, all obstacles with an average slope less than 2.5, or a maximum slope less than 5.0, were classified as false obstacles. Our rule-based classification therefore rejects obstacles with small bounding volume, small average/maximum slope, or small average/maximum relative height.

Figure 8: (a), (b) Obstacle regions before and (c), (d) after 3-D rule-based false obstacle removal.

Our new rule-based 3-D shape reasoning and classification is expected to outperform prior 2-D based obstacle reasoning methods [Bellutta00] that used 2-D area information (not 3-D geometrical measures) to reject small-sized false obstacles.
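The five-measure rejection rule can be sketched as a simple threshold filter over per-obstacle measures. The measure names and most threshold values below are illustrative assumptions; only the average-slope (2.5) and maximum-slope (5.0) values are quoted from the text, which does not state their units.

```python
# Hypothetical per-obstacle measures; most thresholds are illustrative.
THRESHOLDS = {
    "bbox_perimeter": 0.5,   # 3-D bounding-box perimeter (assumed, m)
    "avg_slope": 2.5,        # value quoted in the text
    "max_slope": 5.0,        # value quoted in the text
    "avg_height": 0.2,       # relative height above surroundings (assumed, m)
    "max_height": 0.3,       # (assumed, m)
}

def is_true_obstacle(measures):
    """Reject an obstacle if ANY measure falls below its threshold."""
    return all(measures[k] >= t for k, t in THRESHOLDS.items())

rock = {"bbox_perimeter": 2.0, "avg_slope": 20.0, "max_slope": 60.0,
        "avg_height": 0.4, "max_height": 0.6}
road_blip = dict(rock, avg_slope=1.0)    # shallow average slope -> rejected
```

Because every measure is already accumulated while the segmenter visits each labelled point, the rule adds only a constant-time check per obstacle.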
camera could occupy a large number of pixels, and true measures for obstacle detection. As mentioned earlier,
obstacles far away from the camera could occupy a sig- this prior technique is not expected to work well on ter-
nificantly small area. rain that contains obstacles that slope along the image
Figure 8a shows a road with surrounding natural terrain plane, rather than vertically downwards along image col-
and telephone poles. The initial 3-D obstacle detection umns. Figure 9 shows an example of terrain that contains
algorithm locates all true obstacles (telephone poles, two mounds to the right of the camera with slanted slopes.
trees, etc.), but also locates false obstacles on the flat road Figure 9a shows the true detected obstacle regions using
on the upper left and upper right parts of the image (due our new 3-D obstacle detection algorithm in blue and the
to incorrect range from stereo measurements), as shown corresponding obstacle labels are shown in Figure 9b.
by the overlaid blue regions in Figure 8a. The corre- The 2-D obstacle detection algorithm [Bellutta00] results
sponding segmented obstacle label image is shown in are shown as blue regions in Figure 9c, and the rejected
Figure 8b. Our rule based classification evaluates the 3-D obstacle regions are shown in red. As seen, all of the
geometrical measures for each detected obstacle, and cor- closest mound and much of the larger mound obstacles
rectly rejects the false-obstacles on the road that have low are not detected due to the columnwise scanning tech-
average/maximum height and bounding volumes, as nique used. Additionally, parts of the smaller mound are
shown by the new labeled data in Figure 8c. The overlaid rejected (red regions) since a 2-D blob area measure is
true obstacle image after our rule-based classification is used to reject small obstacles. Note that the previous
shown in Figure 8d, where the red regions are the rejected obstacle detector using 1-D elevation profile fails to
false obstacle regions. detect the two mounds due to the fact that the estimated
slope of this 1-D range profile is not equal, in general, to
6. 3-D Obstacle Detection: Results and Com- the true slope of the visible surface, as seen in the eleva-
tion map in Figure 9(e), and the 1-D elevation profile in
parisons
Figure 9(f).
We present results on several types of terrain using our Figure 10 shows an image of a road with obstacles
new 3-D obstacle detector. These are compared against a (telephone poles, and trees) on the side. Our 3-D obstacle
prior obstacle detection technique [Bellutta00] that used detector locates all obstacles effectively (blue regions in
slope measurements along image columns and 2-D area Figure 10a), and the 3-D rule-based geometrical classifi-
cation rejects false obstacles that are located along the flat
road (red regions in Figure 10(b)). In contrast, the prior 2-
D obstacle detector is unable to reject false obstacles on
the road due to the 2-D pixel area measure used. A large
false obstacle region is also incorrectly located on the
upper right side of the image.
Figure 9: Obstacle regions located using (a), (b) our 3-D obstacle detector and (c) the prior 2-D obstacle detector; (d) range map, (e) elevation map, and (f) 1-D elevation profile.

Figure 10: Obstacle regions located using (a), (b) our 3-D obstacle detector and (c) the prior 2-D obstacle detector.

Figure 11 shows natural terrain with a tall bush on the right, a negative obstacle in front of the camera, and trees in the background. Our 3-D obstacle detector locates all the obstacles effectively, and rejects false obstacles near the foot of the bush and in the grassy terrain beyond the negative obstacle. In this case the 2-D obstacle detector performs comparably, although parts of the trees in the background are not correctly detected. Note that our 3-D obstacle detector correctly distinguishes between the bush and the background trees and assigns unique labels to each (Figure 11(b)), even though they overlap and touch in the 2-D image. This allows combined color/texture and shape-based classification that could correctly classify low bushes as traversable obstacles. A 2-D obstacle detector, on the other hand, would label all touching pixels (including the background trees and the bush) as one single obstacle, which would result in overall misclassification if color/texture and obstacle shape data were fused.
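The unique-labeling behavior can be sketched as a flood fill over obstacle pixels that merges neighboring pixels only when their 3-D points are close, so that objects touching in the image but separated in depth receive distinct labels. This is an illustrative reconstruction under an assumed distance threshold (`max_gap`), not the paper's exact grouping rule:

```python
import numpy as np

def segment_obstacles_3d(points, valid, max_gap=0.5):
    """Label obstacle pixels by flood fill, merging 4-neighbors only
    when their 3-D points lie within max_gap meters of each other.
    A bush and a tree behind it then get distinct labels even where
    they touch in the 2-D image. (Illustrative sketch only.)"""
    H, W, _ = points.shape
    labels = np.zeros((H, W), dtype=int)
    n_labels = 0
    for seed in zip(*np.nonzero(valid)):
        if labels[seed]:
            continue
        n_labels += 1
        labels[seed] = n_labels
        stack = [seed]
        while stack:
            r, c = stack.pop()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < H and 0 <= nc < W and valid[nr, nc]
                        and labels[nr, nc] == 0
                        and np.linalg.norm(points[nr, nc] - points[r, c]) <= max_gap):
                    labels[nr, nc] = n_labels
                    stack.append((nr, nc))
    return labels

# Toy 1x4 'image': two pixels at ~1 m depth adjacent to two at ~5 m.
pts = np.array([[[0.0, 0, 1.0], [0.1, 0, 1.0], [0.2, 0, 5.0], [0.3, 0, 5.0]]])
lab = segment_obstacles_3d(pts, np.ones((1, 4), dtype=bool))
print(lab)   # [[1 1 2 2]]
```

The two near pixels and the two far pixels are adjacent in the image, but the large 3-D gap between them prevents the flood fill from merging them into one obstacle.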
Figure 11: Obstacle regions located using (a), (b) our 3-D obstacle detector and (c) the prior 2-D obstacle detector.

Figure 12: Obstacle regions in image (a) located using (b), (c) our 3-D obstacle detector and (d) the prior 2-D obstacle detector.
Further results, on terrain with a large non-traversable mound, are shown in Figure 12(b),(c). The prior obstacle detector is unable to detect the mound at all (Figure 12(d)), since its normal does not intersect the slicing plane Π defined by the column points of the image plane and the focal point of the camera. The few sections that are detected are discarded as false positives in the area-based blob removal stage.

Figure 13: Obstacle regions in image (a) located using (b), (c) our 3-D obstacle detector and (d) the prior 2-D obstacle detector.
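The advantage of a metric blob test over a raw pixel-count test can be sketched as follows. Converting each blob's pixel width to meters via its mean range (simple pinhole scaling, with an assumed focal length and size threshold) makes the rejection rule range-invariant, which a fixed 2-D pixel-area threshold cannot be. This is our own illustrative sketch, not the paper's implementation:

```python
import numpy as np

def filter_small_obstacles(labels, range_img, focal_px, min_size_m=0.3):
    """Discard labeled obstacle blobs whose *metric* width is below
    min_size_m. A pixel at range Z spans roughly Z / focal_px meters,
    so the same 10-pixel-wide blob is 0.1 m across at 5 m range but
    0.4 m across at 20 m. (Thresholds and camera model are assumed.)"""
    keep = np.zeros(labels.shape, dtype=bool)
    for i in range(1, labels.max() + 1):
        blob = labels == i
        if not blob.any():
            continue
        cols = np.nonzero(blob)[1]
        width_px = cols.max() - cols.min() + 1
        width_m = width_px * range_img[blob].mean() / focal_px
        if width_m >= min_size_m:        # metric test: range-invariant
            keep |= blob
    return keep

# Two blobs with identical pixel size, one at 20 m range, one at 5 m.
labels = np.zeros((8, 40), dtype=int)
labels[1:4, 2:12] = 1                    # 10 px wide, far (real obstacle)
labels[5:8, 20:30] = 2                   # 10 px wide, near (small clutter)
rng = np.where(labels == 1, 20.0, 5.0)
keep = filter_small_obstacles(labels, rng, focal_px=500.0)
print(keep[2, 5], keep[6, 25])           # True False
```

A pixel-area threshold would treat the two blobs identically; the metric test keeps the distant (large) object and rejects the nearby (small) one.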
The last results (Figure 13) show a narrow path in a wooded area along a trail lined with bushes. These bushes are correctly located as obstacles, along with the trees, as seen in Figure 13(b),(c). The 1-D obstacle detection algorithm (Figure 13(d)) misses the bushes on the right; this is not critical, since these bushes are small. Fusion of color with shape-based reasoning is expected to classify the small bushes on the right as traversable, which will simplify navigation of the vehicle along narrow paths.

7. Conclusions and Future Work

In this effort we have detailed a new 3-D obstacle detection algorithm for locating and segmenting obstacles in the scene for autonomous terrain vehicle navigation, and a new 3-D reasoning algorithm to reject false obstacles. The 3-D reasoning technique uses geometrical measures that are derived automatically from the 3-D obstacle detector without any extra computational overhead. Results are presented on scenes of natural terrain that the military autonomous vehicle (HMMWV) is expected to traverse
during day and/or night conditions. Our technique is seen to outperform the prior obstacle detection approach currently used in real-time JPL autonomous vehicles.

Further improvements to our 3-D obstacle detection and reasoning algorithm will include fusion of shape-based and color or texture information to classify the surrounding terrain into different terrain types (dry/normal vegetation, bush, grass, rocks, telephone poles, fences, etc.). This would enable better discrimination between traversable objects (small-to-medium bushes, low-lying grass, tall grass) and non-traversable objects (rocks, poles, trees, tall bushes, steep slopes). Our method can also be extended to the analysis of multiple frames as the vehicle moves, where incremental 3-D obstacle detection and reasoning applied to successive frames would update the detected obstacles swiftly.

The 3-D obstacle detector is currently being integrated with a dynamic terrain modeling simulation tool, where knowledge of the class and geometrical structure of each obstacle, derived from our obstacle detector, will be used to model the dynamics of the load-bearing surface as the vehicle moves over each traversable object. This is useful for velocity control and for predicting vehicle motion over different terrain types, enabling optimal, safe vehicle performance and navigation.

Acknowledgements

The research described in this paper was carried out by the Jet Propulsion Laboratory, California Institute of Technology, and was sponsored by the DARPA-ITO Mobile Autonomous Robot Software (MARS) Program through an agreement with the National Aeronautics and Space Administration. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not constitute or imply its endorsement by the United States Government or the Jet Propulsion Laboratory, California Institute of Technology.

8. References

[Bellutta00] P. Bellutta, R. Manduchi, L. Matthies, K. Owens, A. Rankin, "Terrain Perception for DEMO III", 2000 Intelligent Vehicles Conference.

[Broggi00] A. Broggi, M. Bertozzi, A. Fascioli, C. Guarino LoBianco, A. Piazzi, "Visual Perception of Obstacles and Vehicles for Platooning", IEEE Trans. Intell. Transport. Sys., 1(3), September 2000.

[Crisman98] J.D. Crisman, M.E. Cleary, J.C. Rojas, "The deictically controlled wheelchair", Image and Vision Computing, vol. 16, no. 4, pp. 235-249, 1998.

[Hoover98] A. Hoover, D. Goldgof, K.W. Bowyer, "The space envelope: a representation for 3D scenes", Computer Vision and Image Understanding, vol. 69, no. 3, pp. 310-329, March 1998.

[Lacaze98] A. Lacaze, Y. Moscovitz, N. DeClaris, K. Murphy, "Path Planning for Autonomous Vehicles Driving Over Rough Terrain", 1998 IEEE ISIC/CIRA/ISAS Joint Conference.

[Marti96] J. Marti, A. Casals, "Model-based objects recognition in man-made environments", Proceedings 5th IEEE International Workshop on Robot and Human Communication, pp. 358-363, 1996.

[Matthies94] L. Matthies, P. Grandjean, "Stochastic Performance Modeling and Evaluation of Obstacle Detectability with Imaging Range Sensors", IEEE Transactions on Robotics and Automation, Special Issue on Perception-based Real World Navigation, 10(6), December 1994.

[Matthies96] L. Matthies, A. Kelly, T. Litwin, G. Tharp, "Obstacle Detection for Unmanned Ground Vehicles: A Progress Report", Robotics Research 7, Springer-Verlag.

[Matthies98] L. Matthies, T. Litwin, K. Owens, A. Rankin, K. Murphy, D. Coombs, J. Gilsinn, T. Hong, S. Legowik, M. Nashman, B. Yoshimi, "Performance Evaluation of UGV Obstacle Detection with CCD/FLIR Stereo Vision and LADAR", 1998 IEEE ISIC/CIRA/ISAS Joint Conference.

[Mehlhorn84] K. Mehlhorn, Data Structures and Efficient Algorithms, Springer-Verlag, 1984.

[Sutton98] M. Sutton, L. Stark, K. Bowyer, "Function from visual analysis and physical interaction: a methodology for recognition of generic classes of objects", Image and Vision Computing, vol. 16, no. 11, pp. 745-763, 1998.

[Williamson98] T. Williamson, C. Thorpe, "A Specialized Multibaseline Stereo Technique for Obstacle Detection", IEEE CVPR'98, pp. 238-244.

[Zhang94] Z. Zhang, R. Weiss, A.R. Hanson, "Qualitative Obstacle Detection", IEEE CVPR'94, pp. 554-559.