Progress in IS
Advances and
New Trends in
Environmental and
Energy Informatics
Selected and Extended Contributions
from the 28th International Conference
on Informatics for Environmental
Protection
More information about this series at http://www.springer.com/series/10440
Jorge Marx Gómez • Michael Sonnenschein • Ute Vogel • Andreas Winter • Barbara Rapp • Nils Giesen
Editors
Foreword
First of all, we would like to thank the authors for preparing and delivering their book chapters on time and in high quality. The editors would like to express their sincere gratitude to all reviewers, who devoted their valuable time and expertise to evaluating each individual chapter. Furthermore, we would like to extend our gratitude to all sponsors of the EnviroInfo 2014 conference. Without their financial support, this book would not have been possible. Finally, our thanks go to Springer for their confidence and trust in our work.
Oldenburg, May 2015
Jorge Marx Gómez
Michael Sonnenschein
Ute Vogel
Andreas Winter
Barbara Rapp
Nils Giesen
Contents
Part I Green IT
1 Extending Energetic Potentials of Data Centers by Resource Optimization to Improve Carbon Footprint (Alexander Borgerding and Gunnar Schomaker) ... 3
2 Expansion of Data Centers' Energetic Degrees of Freedom to Employ Green Energy Sources (Stefan Janacek and Wolfgang Nebel) ... 21
3 A Data Center Simulation Framework Based on an Ontological Foundation (Ammar Memari, Jan Vornberger, Jorge Marx Gómez, and Wolfgang Nebel) ... 39
4 The Contexto Framework: Leveraging Energy Awareness in the Development of Context-Aware Applications (Maximilian Schirmer, Sven Bertel, and Jonas Pencke) ... 59
5 Refactorings for Energy-Efficiency (Marion Gottschalk, Jan Jelschen, and Andreas Winter) ... 77
Abstract Electric power is one of the major operating expenses in data centers. Rising and varying energy costs create the need for further solutions to use energy efficiently. First steps to improve efficiency have already been accomplished by applying virtualization technologies. However, a practical approach for data center power control mechanisms is still missing.

In this paper, we address the problem of energy efficiency in data centers. Efficient and scalable power usage for data centers is needed. As background, we present different approaches to improve efficiency and carbon footprint. We propose an in-progress idea to extend the possibilities of power control in data centers and to improve efficiency. Our approach is based on virtualization technologies and live migration, and improves resource utilization by comparing the effects of different virtual machine permutations on physical servers. It delivers an efficiency-aware VM placement by assessing different virtual machine permutations. In our approach, the applications are untouched and the technology is non-invasive with regard to the applications. This is a crucial requirement in the context of Infrastructure-as-a-Service (IaaS) environments.
1 Introduction
Worldwide IP traffic increases year by year. New Information and Communication Technology (ICT) services are emerging, and existing services are migrating to IP technology, for example VoIP, TV, radio and video streaming.

A. Borgerding (*)
University of Oldenburg, 26111 Oldenburg, Germany
e-mail: [email protected]

G. Schomaker
Software Innovation Campus Paderborn, Zukunftsmeile 1, 33102 Paderborn, Germany
e-mail: [email protected]

Following these trends, the power consumption of ICT is becoming increasingly significant. Likewise, data centers are growing in number and size in order to
comply with the increasing demand. As a result, their share of electric power
consumption increases too, e.g. it has doubled in the period 2000–2006 [16]. In
addition, energy costs rise continuously and the data center operators are faced with
customer questions about sustainability and carbon footprint, while economical operation remains the overall goal. Electric power consumption has become one of the major expenses in data centers.
A high performance server in idle-state consumes up to 60 % of its peak power
[11]. To reduce the quantity of servers in idle-state, virtualization technologies are
used. Virtualization technologies allow several virtual machines (VMs) to be
operated on one physical server or machine (PM). In this way the number of servers
in idle-state can be reduced to save energy [6]. However, the rising energy costs lead to rising cost pressure, and further solutions, such as those proposed in the following, are needed.
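The figures cited above (an idle draw of up to 60 % of peak power [11]) imply a simple linear server power model: a static idle share plus a utilization-proportional dynamic share. The sketch below illustrates this; the peak wattage and strict linearity are our illustrative assumptions, not values from the chapter.

```python
def server_power(utilization: float, peak_watts: float, idle_fraction: float = 0.6) -> float:
    """Linear server power model: static (idle) part plus a dynamic part
    proportional to utilization. idle_fraction = 0.6 reflects the ~60 %
    idle power draw cited in the text [11]."""
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be in [0, 1]")
    static = idle_fraction * peak_watts          # consumed even when idle
    dynamic = (1.0 - idle_fraction) * peak_watts * utilization
    return static + dynamic

peak = 400.0  # hypothetical peak draw in watts
print(server_power(0.0, peak))  # idle draw: 60 % of peak
print(server_power(1.0, peak))  # full load: peak draw
```

Under this model, only the dynamic share (here 40 % of peak) is controllable through load placement, which is the lever the chapter's approach exploits.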
This paper extends our contribution to EnviroInfo 2014 – 28th International
Conference on Informatics for Environmental Protection [3] and is organized as
follows: Sect. 2 motivates and defines the problem of energy efficiency and
integrating renewable energy in data centers. Section 3 gives background on
approaches relevant to energy efficiency, virtualization technology and improving
the carbon footprint. In Sect. 4, we present the resource-efficient and energy-adaptive approach. The paper concludes with comments on our ongoing work in Sect. 5.
2 Problem Definition
The share of volatile renewable power sources is increasing. This leads to volatile
energy availability and lastly to varying energy price models. To deal with the
variable availability, we need an approach that ensures controllable power consumption beyond general energy efficiency. Thus, we need to improve the efficiency of the data center using an intelligent and efficient VM placement, in order to adapt to volatile energy availability and improve the carbon footprint while keeping the overall goal of using the invested energy as efficiently as possible.
The increasing number of IT services combined with steadily rising energy costs places great demands on data centers. These conditions induce the need to operate a maximum number of IT services with a minimal employment of resources, since the aim is economical service operation. Therefore, the effectiveness of the
invested power should be at a maximum level. In this paper, we focus on the
server’s power consumption and define the efficiency of a server as the work done
per energy unit [5].
In the related work part of this paper, we analyze different kinds of approaches in
the context of energy consumption, energy efficiency and integrating renewable
power. In this research approach, we want to explore which further options exist to use energy efficiently, how we can influence the data center's power consumption and, finally, how to adapt it to the available volatile renewable energy.
To take advantage of current developments, power consumption should be
increasable in times of low energy prices and reducible otherwise while we stick
to a high efficiency level in both cases. In Service Level Agreements (SLAs) for
instance, a specific application throughput within a time frame is defined. Due to these agreements, we can use periods of low energy prices to produce the throughput well before the time frame expires. In periods of high energy prices, a scheduled decrease of the previously built buffer can be used to save energy costs.
Some approaches [4, 10, 15] use geographically-distributed data centers to
schedule the workload across data centers with high renewable energy availability.
The methodology is only suitable in big, geographically-spread scenarios and the
overall power consumption is not affected. Hence, we do not pursue these
approaches. In general, many approaches are based on strategies with focus on
CPU utilization because CPU utilization correlates with the server’s power con-
sumption directly [5]. The utilization of other server components does not have
such an effect on the server’s power consumption. However, the application’s
performance depends not only on CPU usage, but all required resources are needed
for optimal application performance. Hence, the performance relies on other com-
ponents too and we also want to focus on these other components such as Network
Interface Card (NIC), Random Access Memory (RAM) and Hard Disk Drive
(HDD) to improve the efficiency, especially if their utilization does not have an
adverse effect on the server’s power consumption. Our assumption is that the
optimized utilization of these resources is not increasing the power consumption,
but it can be used to improve the efficiency and application performance.
There are different types of applications; some work stand-alone while others rely on several components running on different VMs. Components of the latter communicate via the network, so network utilization affects such distributed applications. In our approach, we want to include these communication topology aspects. However, the applications' requirements change during operation, sometimes on a large scale and at short intervals. Therefore, we need an online algorithm that acts at runtime to respond to changing values. We need to keep obstacles at a low level by acting agnostically to the applications: the approach should be applicable without the need to change the operating applications. This is a crucial requirement in the context of Infrastructure-as-a-Service (IaaS) environments.
Being agnostic to applications means influencing their performance without their becoming aware of our methodology. For example, if an application intends to write a file to the hard disk, it has to wait until it gets access to the disk. This is a usual situation that an application can handle. In the wait state, the application cannot distinguish whether the wait was caused by another application writing to the disk or by our methodology.
The problem of determining an efficient VM placement can be formulated as an
extended bin-packing problem, where VMs (objects) must be allocated to the PMs
(bins). In the bin-packing problem, objects of different volumes must be fitted into a
finite number of bins, each of the same volume, in a way that minimizes the number
of bins used; this problem is NP-hard. The VM allocation problem is, by comparison, a multidimensional bin-packing problem: instead of a single object size, we have to deal with several resource requirements per VM.
In a data center with k PMs and n VMs operated on them, the number of possible configurations is given by the number of ways of partitioning a set of n elements into k disjoint, non-empty subsets. This is described by the Stirling numbers of the second kind:
S(n, k) = (1/k!) * Σ_{j=0}^{k} (−1)^(k−j) * C(k, j) * j^n
In the case of a data center with 10 VMs and 3 PMs, there are S(10, 3) = 9330 different possible allocations of VMs to PMs; these are called configurations in this paper.
Hence, a global bin-packing solver will not be able to deliver a VM placement fast enough for an online approach.
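The size of this configuration space is easy to verify numerically. A short sketch using the explicit formula above (the function name is ours):

```python
from math import comb, factorial

def stirling2(n: int, k: int) -> int:
    """Stirling number of the second kind S(n, k): the number of ways to
    partition n labelled elements (VMs) into k disjoint, non-empty sets (PMs)."""
    total = sum((-1) ** (k - j) * comb(k, j) * j ** n for j in range(k + 1))
    return total // factorial(k)  # the sum is always divisible by k!

# The example from the text: 10 VMs on 3 PMs.
print(stirling2(10, 3))  # 9330 possible configurations
```

Even for this toy instance the count is already in the thousands, which illustrates why exhaustive search over configurations is ruled out for an online approach.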
The formal description of the VM placement problem as a bin-packing problem is as follows: a set of virtual machines V = {VM_1, ..., VM_n} and a set of physical machines P = {PM_1, ..., PM_k} is given. The VMs are represented by their resource demand vectors d_i. The PMs are represented by their resource capacity vectors c_s. The resource capacity vector of a PM describes the available resources that can be requested by VMs. The goal is to find a configuration so that for all PMs in P:
Σ_{i=1}^{j} d_i ≤ c_s
E(C) = work done / unit energy
is a suitable metric [5]. The aggregated idle times of the PMs may also indicate the
quality of the configuration.
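Since, as noted above, an exact solver is impractical online, greedy heuristics are commonly used for this kind of multidimensional bin packing. Below is a minimal first-fit-decreasing sketch; it is a standard heuristic, not the authors' algorithm, and the demand/capacity values are illustrative.

```python
def first_fit_decreasing(demands, capacities):
    """Greedy multidimensional placement: assign each VM (demand vector)
    to the first PM whose remaining capacity covers it in every dimension.
    Returns a list mapping VM index -> PM index; raises if no PM fits."""
    remaining = [list(c) for c in capacities]
    # Sort VMs by total demand, largest first (classic FFD heuristic).
    order = sorted(range(len(demands)), key=lambda i: -sum(demands[i]))
    placement = [None] * len(demands)
    for i in order:
        for s, cap in enumerate(remaining):
            if all(d <= c for d, c in zip(demands[i], cap)):
                placement[i] = s
                remaining[s] = [c - d for d, c in zip(demands[i], cap)]
                break
        else:
            raise ValueError(f"no PM can host VM {i}")
    return placement

# Demand vectors (CPU, RAM, NIC, HDD) for four VMs, two identical PMs.
vms = [(4, 8, 1, 2), (2, 4, 1, 1), (3, 2, 2, 2), (1, 2, 1, 1)]
pms = [(8, 16, 4, 4), (8, 16, 4, 4)]
print(first_fit_decreasing(vms, pms))
```

Such a heuristic runs in polynomial time and can therefore serve as a building block for a fast-acting online approach, at the cost of possibly using more PMs than the optimum.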
To the best of our knowledge, this is the first approach that investigates agnostic methodologies, without scheduling components, to control a data center's power consumption with the aim of efficiency and the ability to both increase and decrease the power consumption.
3 Related Work
Power consumption and energy efficiency in data centers is a topic on which a lot of work has already been done. In this section, we give an overview of different approaches.
The usage of low-power components seems to offer solutions for lower energy consumption. Meisner et al. [12] addressed the question of whether low power consumption correlates with energy efficiency in the data center context. They discovered that the usage of low-power components is not the solution. They compared low-power servers with high-power servers and defined the energy efficiency of a system as the work done per energy unit. They achieved better efficiency with the high-power servers and found that modern servers are only maximally efficient at 100 % utilization.
Another potential for improvement is to let IT requirements follow energy
availability. There are some approaches [4, 10, 14] that use local energy conditions.
They migrate server workload to data center locations with available renewable power. These ideas are ultimately only suitable for distributed and widespread data centers; locations close to each other typically have the same or only insignificantly different energy conditions. In the latter scenario, the consumption of renewable energy can be increased, but efficient power usage is not taken into consideration.
A different idea is mentioned by Krioukov et al. [9]. In this work, a scheduler has
access to a task list, where the task with the earliest deadline is at the top. This is an
earliest deadline first (EDF) schedule. If renewable energy is available, the EDF
scheduler starts tasks from the top of the task list to use the renewable energy. If less
energy is available, tasks will be terminated. In such approaches, we have to deal with application-specific topics: to build a graded list of tasks to schedule, we must determine the duration each task needs to be processed and a deadline for each task. Terminated tasks lead to application-specific issues that need to be resolved afterwards.
The approach of Hoyer [8] is based on prediction models that calculate the needed server capacity in advance to reduce unused server capacity. Optimistic, pessimistic and dynamic resource strategies were presented. This approach offers methodologies to improve efficiency, but controlling the data center's power consumption is not its focus.
Tang et al. [17] propose a thermal-aware task scheduling. The ambition is to
minimize cooling requirements and to improve the data center efficiency in this
way. They set up a central database with server information, especially server heat
information. An EDF scheduler places the tasks with the earliest deadlines on the coldest server. Thus, they avoid hot spots, and cooling requirements can be decreased to improve efficiency. The usage of a graded task list comes with the same
disadvantages as described before. To avoid dealing with application-specific
topics, the virtual machine is a useful container to place IT loads instead of explicit
application tasks. In many approaches, for example Corradi et al. [6], power
optimizations have already been done. Hence, we are running a set of VMs
concentrated on a small number of potential servers. Unused servers are already
switched off. As further input, we get a target power consumption value.
It is generally accepted that applications operate ideally if they have access to all
required server resources. With the aim of improving the data center’s efficiency,
resource-competing VMs should not be operated on the same physical server
together. Our approach is to create a VM allocation that concentrates VMs with
suitable resource requirements on the same physical server for ideal application
performance and efficiency. In this constellation, each application has access to the
required server resources and operates ideally. Finally, the overall server resources are utilized more highly than before and the efficiency rises. Besides the increased efficiency, this situation also leads to higher power consumption and application performance. This scenario is suitable for times of high energy availability. Following the idea of green energy usage, this technology is also capable of reducing the data center's power consumption in situations of low green power availability. Therefore, the methodology can be used to explicitly reduce resource utilization by combining resource-competing applications, leading to lower power consumption but also to potentially reduced application performance.
In data centers, applications induce a specific power consumption through the server load they evoke. This required amount of power has so far been understood as a fixed and restricted value. Our concept is to turn this amount of power into a controllable value by applying a corresponding VM allocation.
The power consumption PC_dc of a data center breaks down as follows: the total power consumption is the sum of the power consumption of all data center components. Besides the power consumption of all PMs, PC_Servers, we have the power consumption of the support infrastructure PC_Support, i.e. network components, cooling components, UPS, lights, etc.:

PC_dc = PC_Servers + PC_Support

Chen et al. [5] describe the power consumption of a server as the sum of its static (idle, without workload) power consumption PC_servers_idle and its dynamic power consumption PC_servers_dyn:

PC_server = PC_servers_idle + PC_servers_dyn
PC_servers_dyn is the amount of power we can directly influence. It reflects the difference in power consumption between 100 % server utilization and idle mode. Idle servers still consume 60 % of their peak power draw [11]. Hence, a substantial part (up to 40 %) of the server's power consumption is controllable; it can be increased in times of high energy availability and decreased
otherwise. Our approach is based on virtualization technology and the possibility to
live-migrate VMs. The methodology is agnostic to the operating applications. This
is an advantage compared to other task-scheduling-based algorithms, since these have to deal with task execution times and other application-specific topics.
Fig. 1.3 Schematic VM on physical server diagram: aim of reduced power consumption
The Dynamic Voltage and Frequency Scaling (DVFS) technique is used to adapt the power consumption to the actual CPU utilization. DVFS allows a dynamic adaptation of CPU voltage and CPU frequency according to the current resource demand.
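The DVFS effect can be illustrated with the textbook dynamic-power relation P ≈ C · V² · f. This is a standard approximation, not a formula from this chapter, and the operating points below are hypothetical:

```python
def dynamic_cpu_power(capacitance: float, voltage: float, frequency: float) -> float:
    """Textbook approximation of dynamic CPU power: P = C * V^2 * f,
    with switched capacitance C, supply voltage V and clock frequency f."""
    return capacitance * voltage ** 2 * frequency

# Hypothetical operating points: lowering the frequency usually permits a
# lower supply voltage too, so power drops superlinearly.
high = dynamic_cpu_power(1e-9, 1.2, 3.0e9)  # 1.2 V @ 3.0 GHz
low = dynamic_cpu_power(1e-9, 0.9, 1.5e9)   # 0.9 V @ 1.5 GHz
print(high, low)
```

Because voltage enters quadratically, scaling frequency and voltage down together saves considerably more power than the frequency reduction alone would suggest, which is why reducing CPU utilization translates into lower power draw on DVFS-capable servers.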
The resulting configurations have an evenly distributed server utilization. This reduces the occurrence of hot spots, similar to the approach mentioned by Tang et al. [17].
We use a local optimizer component working on a single PM that focuses on a solution for its own PM. This component has to find a solution for just one PM, and the set of candidate VMs is reduced to those actually operated plus a subset of those in the offer pool. As input, the optimizer receives a defined target power consumption, which has to be reached with the best possible efficiency.
4.2 Algorithm
The algorithm operates on the VMs' resource demands, which are used to build the competing resource situation. Only the attainable CPU utilization is a variable, implicit value that corresponds to the power consumption target value.
Every PM's optimizer strives to reach the target value by optimizing its own situation. We have an offer pool of VMs, which can be accessed by every PM's optimizer. The optimizer is able to read the offered VMs from other PMs or to offer VMs itself. If the target value is greater than the actual value, the optimizer takes suitable VMs from the pool and hosts them until the target value is reached. If the target is lower than the actual value, the optimizer offers VMs to the pool to reduce its own value. Furthermore, additional VMs can be hosted from the pool to create competing resource situations that reduce the CPU utilization and reach the target value.
Developing a reduced power consumption VM allocation can be done in three
ways:
(i) Migrate VMs to other PMs. This reduces the CPU utilization and the power
consumption by DVFS technology.
(ii) The optimizer arranges a resource-competing allocation, which reduces the CPU utilization and, as a result, decreases the power consumption via the DVFS technology.
(iii) The optimizer arranges CPU overprovisioning. CPU utilization is already at
100 % and further VMs will be hosted. The additional VMs do not increase the
PM’s power consumption but reduce the power consumption of the PM they
came from. Hence, the overall power consumption is decreasing.
The strategy to reduce the power consumption starts with (i) and cascades down to the methodology of (iii). At first, the target is pursued with (i); if this does not lead to the required results, we go on with (ii) and lastly with (iii). Using the methodology of (i), we incur no further risk of SLA violation because the application's performance is not influenced. In (ii) and (iii) we potentially slow down the applications, probably increasing the risk of SLA violations. Hence, the methodology always starts with step (i).
The formal description of the efficiency and power consumption problem is as follows: a set of virtual machines V = {VM_1, ..., VM_n} and a set of physical machines P = {PM_1, ..., PM_k} is given. The VMs are represented by their resource demand vectors d_i. The PMs are represented by their resource capacity vectors c_s. The goal is to find a configuration C so that for all PMs in P:
Σ_{i=1}^{j} d_i ≤ c_s + x_s
where the vector x_s is an offset to control under- and overprovisioning of the server resources on PM_s, and j is the total number of VMs on PM_s. We use x_s to control the resource utilization on the PMs to induce the intended server utilization and thereby their power consumption.
On the one hand, we have the efficiency metric

E(C) = work done / unit energy
On the other hand, we have the difference Δ between the PM's power consumption PC_server and the target power consumption PC_target:

Δ(target, C) = PC_target − PC_server(C)
Δ represents the (positive) deviance from the target power consumption. In the case of a lower target power consumption, a lasting deviance is the indicator to go on with the next step, (ii) or (iii).
The process of reaching a suitable VM placement and the behaviour of the locally executed optimizer is demonstrated by the following pseudocode:

Inputs: t, the target power consumption for the local PM; p, the PM's actual power consumption; the resource utilization
Output: a VM placement for the local PM that evokes the target power consumption
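The pseudocode body itself is not reproduced in this extract; only its inputs and output are listed. The following Python sketch is our own reading of the described behaviour (a per-PM loop that takes VMs from, or offers VMs to, the shared offer pool). All names, the additive watts-per-VM abstraction and the idle wattage are assumptions, not the authors' implementation.

```python
def local_optimizer(target_w, hosted, offer_pool, idle_w=240.0):
    """Sketch of the per-PM optimizer loop. Each VM is modelled as a
    (name, watts) pair, where watts is the extra draw it adds on this PM."""
    def power():
        return idle_w + sum(w for _, w in hosted)

    # Below target: pull VMs from the shared pool, but only those whose
    # hosting does not overshoot the target (raises utilization, step (i)).
    while power() < target_w:
        fit = next((vm for vm in offer_pool if power() + vm[1] <= target_w), None)
        if fit is None:
            break
        offer_pool.remove(fit)
        hosted.append(fit)

    # Above target: offer the smallest VMs back to the pool until the
    # PM's draw is at or below the target.
    while power() > target_w and hosted:
        hosted.sort(key=lambda vm: vm[1])
        offer_pool.append(hosted.pop(0))

    return hosted, offer_pool

hosted = [("VM1", 30.0), ("VM2", 50.0)]
pool = [("VM3", 20.0), ("VM4", 40.0)]
hosted, pool = local_optimizer(350.0, hosted, pool)
print(hosted, pool)
```

A real implementation would additionally fall back to the resource-competition and overprovisioning steps (ii) and (iii) when a lasting deviance Δ remains, as described above; the sketch covers only the migration step (i).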
When we last left Maharbal upon the shores of the Adriatic he was a
prey to great sorrow at the loss of his dear friend Mago. But soon he
had no time for any personal feelings, for the army was once more in
motion. Hannibal, ever mindful of his dream, proceeded to follow out
the plan that the dream had suggested, namely, the devastation of
Italy. Accordingly, ever leaving a destroyed territory in his wake, he
marched onward and southward. Every village that he came across
he pillaged and burned, every town or walled city that he met he laid
siege to, captured, and destroyed. It was not a part of his plan of
campaign to allow his followers to hamper themselves with the
quantities of female slaves that they took prisoners, as there could
be no means of exportation for them. Therefore, merely delaying for
a few days’ repose after the capture of each place, he caused the
army to relinquish all the women they had taken, and so to march
on, ever forward, unhampered save by the enormous booty they had
acquired.
The power of Rome having been apparently paralysed, he, for a
considerable space, wandered whither he would, utterly unopposed.
Having traversed, from end to end, the countries of Picenum,
Campania, Samnium, and Apulia; having for months and months
devastated all the richest country in Italy, under the very eyes of the
following force of Romans, under the Dictator Fabius, surnamed
Cunctator or the Lingerer, he seized upon and carried by assault the
citadel and town of Cannæ, where there was an immense store of
provisions and materials of war belonging to the Romans. There he
rested for a time, and armed all his Libyan infantry with Roman
armour and Roman weapons. What a delight must not the
Carthaginian chief have felt, as he dealt out by the thousand to his
followers the suits of armour that he had taken from the Roman
warriors even in their own country. He now had, however, not only
the most absolute confidence in himself and his mission, but a
sarcastic delight in thus arming his forces with Roman harness to
fight against the Romans themselves. And this feeling was shared by
the men of mixed nationalities in his army, who, with feelings of
triumph, arrayed themselves in the trappings of the enemy whom
they were commencing to despise.
Meanwhile, the members of the Senate at Rome were tearing their
hair. They determined that an effort must be made, and this puny
invader, who, with such a ridiculously small force, had dared to
affront all the might of Rome, must be crushed forthwith. Despite,
therefore, the previous disasters, they girt their loins together most
manfully, and prepared for new and more determined efforts to wipe
Hannibal and all his crew off the face of the earth.
What the power of the Roman Senate, what the resolution of the
Roman people must have been, is exemplified by the fact that,
despite previous losses, they soon had in the field an army
amounting in number to more than four times the usual annual levy
of legions. For it consisted, counting horse and foot, of nearly ninety-
eight thousand men! And the Dictator, the lingerer Fabius, having
been proved a failure, and he and his master of the horse, and
sometimes co-dictator, Minucius, having been repeatedly defeated in
various small actions and skirmishes, this enormous force was
placed under the command of the two new consuls for the year,
Paullus Æmilius, and Terentius Varro, the former being a patrician of
great fame, the latter a popular demagogue of plebeian origin.
Æmilius had already greatly distinguished himself in the Illyrian war,
for which he had celebrated a splendid triumph; but as for Varro, he
was, although the representative of the people, nothing but a vulgar
and impudent bully, with no other knowledge of war than his own
unbounded assurance. When Hannibal, with his usual military
genius, had seized upon the citadel of Cannæ, these two consuls,
burning to retrieve the frequent recent disasters, arrived upon the
scene and took over the command. But after all that had gone
before, they were not sure of themselves, and therefore persuaded
the out-going consuls, Cnœus Servilius and Marcus Atillus, to remain
and join in the battle. Marcus Minucius likewise, who had been co-
dictator with Fabius, returned to the army to take part in the great
fight which, with all his rashness, he had not himself been able to
precipitate during his own term of office, but which he knew to be
imminent. He had already suffered a defeat at the hands of
Hannibal, and was burning to gain his revenge. And now he knew
that he had his chance against the comparatively small force of the
presumptuous invader, for never, in all her history, had Rome put
such an enormous army in the field.
Hannibal and his army were encamped upon some heights to the
south of a river called the Aufidus. This stream was remarkable in
one respect, it being the sole stream in the whole of Italia which
flows through the range of the Apennine mountains, rising on their
western side, passing through the hills, and falling into the Adriatic
Sea on the eastern side of the Italian Peninsula. From the excellent
situation of the Carthaginian camp, all the military dispositions of the
Romans could be easily observed, and by means of the spies
employed by old Sosilus, Hannibal was not long in being informed of
the dissensions between the two consuls. Never was there an
instance in which the disadvantage of a dual command was shown
more than upon the present occasion, when one consul was in
command of the whole force one day and the other the next, and
what the one did to-day the other undid to-morrow. For it was the
custom in the Roman army when both consuls were present to give
to each the supreme command on alternate days. It was a wonder,
however, that after the example of the co-dictators Fabius Cunctator
and Marcus Minucius, who had found it an utter failure a short time
before, that this system of daily alternate command had not been
abandoned. For Fabius and Minucius had found it so unworkable
that they had for a time divided the army into two, each taking his
own half. And with his half only, having risked a battle, Minucius was
utterly defeated owing to an ambush of cavalry prepared by
Hannibal. The late Master of the Horse and his troops were, upon
this occasion, only saved from utter destruction by the Lingerer
setting his own half of the army in motion, and coming to his rash
colleague’s assistance in the nick of time, and checking the
Carthaginian pursuit, with much loss to the triumphant Phœnician
force. After that, Minucius had wisely resigned his right to the
command, leaving the entire power in the hands of Fabius.