

Modelling Performance & Resource Management in Kubernetes

Víctor Medel
Aragon Institute of Engineering Research (I3A)
University of Zaragoza, Spain
[email protected]

Omer Rana
School of Computer Science & Informatics
Cardiff University, UK
[email protected]

José Ángel Bañares
Aragon Institute of Engineering Research (I3A)
University of Zaragoza, Spain
[email protected]

Unai Arronategui
Aragon Institute of Engineering Research (I3A)
University of Zaragoza, Spain
[email protected]

ABSTRACT

Containers are rapidly replacing Virtual Machines (VMs) as the compute instance of choice in cloud-based deployments. The significantly lower overhead of deploying containers (compared to VMs) has often been cited as one reason for this. We analyse the performance of the Kubernetes system and develop a Reference net-based model of resource management within this system. Our model is characterised using real data from a Kubernetes deployment, and can be used as a basis to design scalable applications that make use of Kubernetes.

1. INTRODUCTION

Kubernetes (http://kubernetes.io/) provides the means to support container-based deployment within Platform-as-a-Service (PaaS) clouds, focusing specifically on cluster-based systems. Kubernetes enables deployment of multiple "pods" across physical machines, enabling scale-out of an application with a dynamically changing workload. Each pod can support multiple Docker containers, which are able to make use of services (e.g. file system and I/O) associated with a pod. With significant interest in supporting cloud native applications (CNA), Kubernetes provides a useful approach to achieve this. One of the key requirements for CNA is support for scalability and resilience of the deployed application, making more effective use of on-demand provisioning and elasticity of cloud platforms. Containers provide the most appropriate mechanism for CNA, enabling rapid spawning and termination compared to Virtual Machines (VMs). The process management origin of container-based systems also aligns more closely with the granularity of many CNA – enabling single or groups of containers to be deployed on-demand [1].

Stream processing represents an emerging class of applications that require access to Cloud services, where the overhead of launching/deploying new VMs or containers remains a significant challenge. Amazon Lambda (https://aws.amazon.com/es/lambda/details/) provides an example of such a system, where resource provisioning is carried out at 100ms intervals (rather than on an hourly interval as with most Cloud PaaS and IaaS providers). In AWS Lambda, events generated through one or more user streams (via other AWS services, e.g. S3, DynamoDB, Cognito Authentication for mobile services, etc., or via a user-developed application) are processed through a lambda function. This function is provisioned through a container-based deployment, where the execution of the lambda function is billed at 100ms intervals. The container can be frozen when no longer in use (whilst maintaining a temporary storage space and a link to any running processing). Understanding the performance associated with deploying, terminating and maintaining a container that hosts such a lambda function is therefore significant, as it affects the ability of a provider to offer finer-grained charging options for users with stream analytics/processing requirements.

We present a Reference Net (a kind of Petri Net [2] representation) based performance and management model for Kubernetes, identifying the different operational states that may be associated with a "pod" and container in this system. These states can be further annotated and configured with monitoring data acquired from a Kubernetes deployment. The model can be used by an application developer/designer to: (i) evaluate how pods and containers could impact their application performance; (ii) support capacity planning for application scale-up.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

© 2016 ACM. ISBN 978-1-4503-4616-0. DOI: 10.1145/2996890.3007869

2. RELATED & BACKGROUND WORK

Virtual Machine virtualization and container virtualization have attracted considerable research attention, focusing on performance comparison using a suite of workloads that stress CPU, memory, storage and networking resources [3, 4, 5]. However, to the authors' knowledge, no work has addressed container performance issues following a rigorous
analytical approach. Unfortunately, most computer scientists are either not familiar with, or reluctant to use, formal methods. Even for mature technologies, such as cloud computing, only a small portion of the work on performance considers formal models [6]. In [7, 8], an iterative and cyclic approach is proposed, starting from the specification of functional algorithms (specified in the functional models). It then continues with the specification of the computational resources available (in the operational models), providing complementary views: control flow, data flow, and resources. The central role in this methodology is given to a set of Petri Net (PN) models describing the required functionality and the computational resources involved in the execution. PN is a well-known formalism that combines simulation and analysis techniques. The formalism allows a developer to analyse the behaviour of the system throughout the development lifecycle and to gain an understanding of infrastructure and application behaviour. In particular, PNs provide different analysis and prediction techniques that allow developers to assess functional and non-functional properties by means of simulation and qualitative/quantitative analysis. A Timed Petri Net enriches the model with time, which enables the exploration of minimal and maximal bounds of performance and workload. As a simulation tool, Petri Nets allow the formulation of models with realistic features (such as competition for resources) that are absent in other paradigms (such as plain queueing networks). These models must, however, be fed with temporal information, which has been the focus of previous works. Performance evaluation has been carried out mainly for traditional virtual machine execution in Clouds [9]. In [4, 10], Virtual Machines and containers are compared according to several performance metrics. In the last few years, a few proposals have emerged to manage container clusters, such as Kubernetes and Docker Swarm (https://www.docker.com/products/docker-swarm/). Currently, Kubernetes seems to be the most fully featured and production grade. Limited research exists about container architecture & Kubernetes performance. In [5], a container performance study with Docker shows network performance degradation in some configurations and a negligible CPU performance impact in all configurations. Network virtualization technologies (Linux Bridge, Open vSwitch) are pointed to as reasons, but mainly in fully nested-container configurations where network virtualization is used twice. Unlike Docker, Kubernetes uses a partially nested-container approach with the Pod concept, where network virtualization is used only once, as the same IP address is used for all containers inside a Pod, leading to better performance. The Kubernetes team reports several performance metrics (http://blog.kubernetes.io/2016/03/1000-nodes-and-beyond-updates-to-Kubernetes-performance-and-scalability-in-12.html). They measured the response time of different API operations (e.g. GET, PUT, POST operations over nodes and pods) and the pod startup end-to-end response time. In their experiments, the 99th percentile pod startup time was below three seconds in a cluster with 1000 nodes. They also propose Kubemark, a system to evaluate the performance of a Kubernetes cluster (https://github.com/kubernetes/kubernetes/blob/release-1.3/docs/proposals/kubemark.md).

Kubernetes is based on a master-slave architecture, with a particular emphasis on supporting a cluster of machines. The communication between the Kubernetes master & slaves (called minions in Kubernetes terminology) is realised through the kubelet service. This service must be executed on each machine in the Kubernetes cluster. The node which acts as master can also carry out the role of a slave during execution. As Kubernetes works with Docker containers, the docker daemon should be running on every machine in the cluster. In addition, Kubernetes makes use of the etcd project to provide a distributed storage system over all nodes, in order to share configuration data. A master node runs an API server, implemented with a RESTful interface, which gives an entry point to the cluster. Kubernetes uses its API service as a proxy to expose the services executing inside the cluster to external applications/users.

2.1 Kubernetes Background

The basic working unit in Kubernetes is a pod – an abstraction of a set of containers tightly coupled with some shared resources (the network interface and the storage system). With this abstraction, Kubernetes adds persistence to the deployment of single containers. It is important to note two aspects of a pod: (i) a pod is scheduled to execute on one machine, with all containers inside the pod being deployed on the same machine; (ii) a pod has a local IP address inside the cluster network, and all containers inside the pod share the same port space. The main implication of this is that two services which listen on the same port by default cannot be deployed inside a pod. A pod can be replicated across several machines for scalability and fault tolerance purposes. When a service or a set of services is deployed over several machines, we can consider two levels: (1) the functional level or application level involves exposing dependencies between the deployed services. Different services need to be coordinated in order to provide high-level functionality. An example of this kind of relationship is the deployment of a stream processing infrastructure (e.g. Apache Kafka, Storm, Zookeeper and HDFS for persistence) or the GuestBook example provided by Kubernetes, composed of a PHP frontend and a Redis master and slave. Ubuntu Juju is a reference project which works at the functional level to coordinate the deployment of services. (2) The operational level or deployment level involves mapping services to physical machines, VMs, pods or containers. It is platform dependent and must involve isolation between resources. Kubernetes primarily focuses on the operational/deployment level. A pod implements a service, and coordination between different pods is achieved through global variables. Services running in other pods can be discovered through a DNS. This approach imposes some restrictions on Kubernetes. For instance, in the Guestbook example, Kubernetes' scheduler cannot ensure that the three pods are deployed correctly, because Kubernetes does not manage the application level. The communication between pods is made at the application level. Kubernetes makes use of a number of support services, deployed in an isolated namespace kube-system, such as logging (fluentd) & monitoring services (Heapster & Prometheus), a dashboard (Grafana), etc. Kubernetes also has a specific DNS server, deployed as an add-on inside a pod.
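To make the pod abstraction concrete, the following sketch (ours, not from the paper) creates a two-container pod through the official Kubernetes Python client; the pod name, images and resource figures are illustrative assumptions, not values used in our experiments.

```python
# Illustrative sketch (not from the paper): create a pod with two
# containers that share the pod's network namespace and a volume.
# Assumes a reachable cluster and the official `kubernetes` client.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="example-pod"),  # hypothetical name
    spec=client.V1PodSpec(
        volumes=[client.V1Volume(
            name="shared-data",
            empty_dir=client.V1EmptyDirVolumeSource())],
        containers=[
            client.V1Container(
                name="frontend",
                image="php:7-apache",          # illustrative image
                ports=[client.V1ContainerPort(container_port=80)],
                volume_mounts=[client.V1VolumeMount(
                    name="shared-data", mount_path="/data")],
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "500m", "memory": "256Mi"})),
            client.V1Container(
                name="cache",
                image="redis:3",               # illustrative image
                # Must not also listen on port 80: containers in a pod
                # share one IP address and one port space.
                volume_mounts=[client.V1VolumeMount(
                    name="shared-data", mount_path="/data")]),
        ]))

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Both containers here can reach each other via localhost and exchange data through the shared volume, which is exactly the coupling that defines a pod.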
3. STATE & PERFORMANCE MODELS

Figure 1: Model of the Kubernetes manager system and shared resources.

Kubernetes allows us to deploy several pods over physical machines while it manages their lifecycle (i.e. start/stop new instances of running pods). We model a pod lifecycle
in order to estimate the impact of different scenarios on the deployment time and the performance of the applications running inside the pod. According to the pod restart policy, there are two kinds of pods in Kubernetes: (i) Service Pods – expected to be in a permanent running state – which generate a background workload in the cluster and may include Kubernetes system services (e.g. monitoring and logging tools) or application services; (ii) Job/batch Pods – expected to terminate on completion. The restart policy of a job can be onFailure or never. The pod's execution time is dependent on the application deployed inside the containers. If the restart policy is onFailure, the time to deploy new pod instances in failure scenarios will have an impact on the total performance of the job or service. The performance metrics for the two kinds of pods are different. For example, for a service pod the deployment time is significant, along with the response time; while for a job pod the deployment time and the total execution time (including restarting, if necessary) are both useful metrics. Before a pod is executed, it requests resources, such as RAM and CPU, from the Kubernetes scheduler. If there are enough resources available in the cluster, the scheduler chooses the best node on which to deploy the pod. The CPU request is only taken into account when CPU-intensive processes are running. When a container is idle (e.g. it is inside a service pod and the service has low use at some point), other containers can utilise the unused CPU time. With this resource model, it is easy to see that the total performance of the pod depends on its resource requests and on the total workload at the node.
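As a concrete reading of this resource model, the sketch below (ours, not part of the paper's net) mimics the admission rule used later for the Machines place: each node is a tuple of (node id, free RAM, free cores), and a pod is admitted to the first node that can hold its request, otherwise it stays pending. The node names and capacities are assumptions.

```python
# Toy admission check in the spirit of the described resource model
# (our sketch): nodes are tuples like the tokens of the Machines place.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_ram_mb: int
    free_cores: float

def schedule(pod_request, nodes):
    """Place a pod on the first node with enough RAM and cores.
    Returns the chosen node, or None (pod stays 'Pending Scheduling')."""
    ram_mb, cores = pod_request
    for node in nodes:
        if node.free_ram_mb >= ram_mb and node.free_cores >= cores:
            node.free_ram_mb -= ram_mb   # resources held until released
            node.free_cores -= cores
            return node
    return None

cluster = [Node("minion-1", 32_000, 12.0), Node("minion-2", 32_000, 12.0)]
print(schedule((2_048, 0.5), cluster))   # -> places the pod on minion-1
```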
3.1 Characterising Performance Models

Petri Nets [2] are a formal modelling tool that can be used to describe and understand the behaviour of concurrent and distributed systems. A PN is a graph with two disjoint kinds of nodes: places and transitions. In high-level PNs, places represent a state of the system (or subsystem) and transitions represent the actions (or events) that change the state. Arcs going from a place to a transition represent preconditions that need to be fulfilled in order to fire the transition. Likewise, arcs from transitions to places represent postconditions fulfilled when a transition is fired.

We use Object Nets [11] (a type of PN [2]) with reference semantics for our model, where a token net represents a task or agent, with a plan for the execution procedure, that can be executed on different machines or locations. Token nets are called object nets in distinction to the system net to which they belong. Interactions between object nets, or between the object nets and the system net, are represented by labels, as will be illustrated in the models. We have implemented our model in Renew [12], which uses Java as the language for inscriptions and which allows us to use tuples as tokens. Additionally, in Renew, it is possible to pass parameters to synchronisation inscriptions. The inscriptions on the arcs match the tokens in the net using tuple notation. For legibility, in the net figures a double-circle place represents the place with the same name.

In Kubernetes, the lifecycle of a pod depends on the state (and consequently the lifecycle) of the containers that are inside it. For instance, a pod has to wait until all its containers have been created. With the Object Nets abstraction, we can represent the Kubernetes manager system as the System Net and the pods (with their containers) as Token Nets. The tokens inside our token net are containers, and the tokens inside our System Net represent pods, as illustrated in Figure 1. The details of how to create the instances of the Token Nets are hidden to improve legibility. In addition, we have supposed that the scheduler assigns a pod to a single node arbitrarily, as long as the machine has enough resources available. If there are not enough resources in the cluster, the pod waits in the Pending Scheduling place. This behaviour could be refined by introducing more sophisticated policies and a reject place for pods. The place Machines represents the resources managed by the scheduler. For each machine, there is a tuple token with the identification of the node, the amount of available RAM and the number of available cores. The resources assigned to a pod are only released when the pod restart policy is "never" or "onFailure".

Once the pod has been assigned to a machine, Kubernetes starts creating the containers (synchronised in the model by the inscription runCont with the Token Net in Figure 2) and the pod waits in the Pending place. When all containers have been created, the pod state changes to Running. While in the Running state, the pod waits for its containers to terminate. If a container fails, the pod goes to the RunningFailed place, where it waits for the termination of all containers and, based on the restart policy, the containers might be restarted. If there are no failures, the pod will stay in the Running place or will eventually reach the Success place when all containers have finished.

Figure 2: Model of the lifecycle of a Container.

Figure 2 illustrates the behaviour of a container (without the places and transitions needed to create the initial marking, as before). Tokens in the net are the identifiers for each container. For simplicity, we have included the restart policy in the net. A created container enters the Running place, and may reach the Success or Failure place. The firing of the corresponding transitions (T1, T2 and T3) is synchronised with the System Net. According to the restart policy, the containers might return to the Running place or they might finish in the SuccessExit or FailedExit places. We include several timed transitions, as summarised in Table 1.

Table 1: Timed transitions in the model

Transition   Variable
T1           Time to create a container
T2           Execution time of a container
T3           Time until the next failure in a container
T4, T5       Time to restart a container
T6, T7       Graceful termination of a container

By default, the firing of T2 and T3 is arbitrary and non-deterministic; however, with Renew, it is possible to simulate any probability distribution for the T2 and T3 transitions in order to simulate a failure. Additionally, it is possible to assign different random distributions to the timed transitions. In the next sections we present different experiments made to obtain realistic values for these metrics. The T2 and T3 transitions are application dependent (they represent the termination time and the time to the next failure, respectively). The termination time (T6 and T7) and the restarting time of a container (T4 and T5) do not depend on the success of the container, so both transitions are modelled with the same distribution. Transitions in the Token Net are synchronised with the System Net. For instance, when a container is terminated, the corresponding pod in the System Net is moved in the high-level net.
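As a rough illustration of how such a timed model can be exercised outside Renew, the following Python sketch (ours, not the Renew model) generates one possible trace of a container through the states of Figure 2; the exponential rates are arbitrary assumptions, not measurements from the paper.

```python
# Toy trace generator for the container lifecycle of Figure 2 (our
# sketch, not the Renew Reference net). Timed transitions are sampled
# from assumed exponential distributions; T5 and T7 behave like T4
# and T6, as discussed in the text.
import random

RATES = {"T1": 2.0, "T2": 0.1, "T3": 0.05, "T4": 5.0, "T6": 0.03}  # assumed

def fire(t):
    """Sample a firing delay (in seconds) for timed transition t."""
    return random.expovariate(RATES[t])

def container_trace(restart_policy="onFailure", max_restarts=3):
    clock = fire("T1")                   # T1: time to create the container
    events = [(clock, "Running")]
    for _ in range(max_restarts + 1):
        run, fail = fire("T2"), fire("T3")
        if run < fail:                   # T2 fires first: normal completion
            clock += run + fire("T6")    # T6: graceful termination
            events.append((clock, "SuccessExit"))
            return events
        clock += fail                    # T3 fires first: failure
        events.append((clock, "Failed"))
        if restart_policy != "onFailure":
            break
        clock += fire("T4")              # T4: time to restart the container
        events.append((clock, "Running"))
    events.append((clock, "FailedExit"))
    return events

for t, state in container_trace():
    print(f"{t:8.2f}s  {state}")
```

Running many such traces and averaging is the same Monte Carlo use of the model that Renew supports natively once the transitions are fed with the measured distributions below.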
3.2 Creation Time

We deployed Kubernetes on two physical machines, each with 32GB of RAM and 12 Intel Xeon E5-2620 (2.00GHz) cores. In Figure 3, we show the mean deployment time of the Guestbook application in different configurations with one and two machines (n=1, n=2). Each machine has preloaded Docker images. In the Guestbook experiments, each pod has exactly one container. The "jobs" line in the figure is the same experiment with all containers in a single pod. From the results, we can observe the overhead introduced by creating pods over the Docker containers. Figure 4 depicts the mean time needed to create a container per machine in each scenario, illustrating a deployment time of ∼0.6s for a single machine and ∼1s for two machines. A possible explanation of this behaviour could be the overhead introduced to synchronise the deployment across different machines. With this experiment, we can estimate the value of T1 depending on the scenario.

Figure 3: Deployment time vs. number of pods, with a 95% confidence interval.

Figure 4: Mean deployment time per container per machine vs. number of containers.
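A measurement of this kind can be scripted against the API server; the sketch below (ours, reusing an assumed pod body as in the earlier example) times the interval from the creation request until the pod reports the Running phase. Polling adds up to one poll interval of error, which is acceptable at the ∼0.6s scale measured here.

```python
# Rough pod start-up timer (our sketch): measures creation-request ->
# Running, i.e. an estimate of T1 plus scheduling delay.
import time
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

def time_pod_startup(pod_body, namespace="default", poll=0.1):
    name = pod_body.metadata.name
    t0 = time.monotonic()
    v1.create_namespaced_pod(namespace=namespace, body=pod_body)
    while True:
        phase = v1.read_namespaced_pod(name, namespace).status.phase
        if phase == "Running":
            return time.monotonic() - t0
        if phase in ("Failed", "Succeeded"):
            raise RuntimeError(f"pod ended in phase {phase}")
        time.sleep(poll)
```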

3.3 Termination Time

A pod is expected to be terminated at some point. If it is a service, and consequently has to be running all the time, the termination might be due to a failure, and the pod has to be restarted. This philosophy also applies to containers, as we have discussed previously in the Container Net model (transitions T4, T5, T6, T7). We can consider that T3 depends on the application, and it represents the failure rate (or the time between failures). When a pod terminates, Kubernetes waits for a grace period (which by default is 30 seconds) before it kills any associated containers and data structures. To associate monitored metrics with the transitions, we perform the following experiments:

Transitions T4, T5: these relate to the container restart time. We have deployed pods with a variable number of containers to measure this time. Results are shown in the column "T5 per Container" in Table 2. The mean time to terminate a container is independent of the number of containers in a pod. Additionally, the highest experimental measure is ∼0.3s, and 80% of the sampled times are <0.22s. The behaviour of T4 is the same as that of T5.

Table 2: T5 and T6 experimental results

C    T5 per Container   T6 Graceful termination   T6 per Container
1    0.01               30                        0
10   0.11               30.99                     0.10
40   0.15               34.69                     0.11
60   0.16               37.04                     0.11

Transitions T6, T7: these model the normal behaviour of Kubernetes. On successful completion, Kubernetes waits for the grace period and deletes all data structures associated with a container. We have measured these variables in the columns "T6 Graceful termination" and "T6 per Container" in Table 2. For these experiments we set the grace period to 30s (the default). We can observe that the overhead (column "T6 per Container") is independent of the number of containers. Transition T7 has a similar behaviour.
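The grace-period behaviour can be timed in the same style as the start-up measurement; the sketch below (ours) measures from the delete request until the pod object disappears from the API server, which includes the configured grace period.

```python
# Rough pod termination timer (our sketch): measures delete-request ->
# object gone, which includes the grace period discussed above.
import time
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()

def time_pod_termination(name, namespace="default",
                         grace_seconds=30, poll=0.1):
    t0 = time.monotonic()
    v1.delete_namespaced_pod(name, namespace,
                             grace_period_seconds=grace_seconds)
    while True:
        try:
            v1.read_namespaced_pod(name, namespace)
        except ApiException as e:
            if e.status == 404:          # pod object fully removed
                return time.monotonic() - t0
            raise
        time.sleep(poll)
```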
3.4 Micro-benchmarks

To measure the overheads for applications deployed in Kubernetes, we consider the following scenarios: (i) one pod is deployed and all containers are inside that pod; (ii) several pods are deployed and there is exactly one container inside each pod. The total number of containers deployed is given by C. All pods are deployed on the same machine. Each experiment has been repeated twenty times, and we present the mean (µi) and standard deviation (σi) for scenario i. For comparison, we consider the following test: H0: µ1 − µ2 = 0 and H1: µ1 − µ2 ≠ 0.
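The paper does not name the exact test statistic used; as an illustration, a two-sample test over the twenty repetitions per scenario could be run as follows (our sketch, with made-up sample arrays). Welch's variant is used here because it does not assume equal variances between the two scenarios.

```python
# Two-sample test of H0: mu1 - mu2 = 0 (our sketch; the paper does
# not specify the exact test used).
from scipy import stats

scenario1 = [16.05, 15.90, 16.20, 16.10]   # made-up samples, seconds
scenario2 = [15.98, 16.00, 16.10, 15.90]   # made-up samples, seconds

t_stat, p_value = stats.ttest_ind(scenario1, scenario2, equal_var=False)
reject_h0 = p_value < 0.05                  # assumed significance level
print(f"t = {t_stat:.3f}, p = {p_value:.3f}, reject H0: {reject_h0}")
```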
CPU-intensive application: we use the multi-threaded POV-Ray 3.7 benchmark to measure the overhead of pods for CPU-intensive use. Kubernetes permits Docker-based CPU reservation for containers. If there is no contention, a container uses all available CPU cores. With multiple competing containers, CPU sharing is proportional to the reservations. The results of the experiments are shown in Table 3. As expected, the execution time is linear in the number of parallel containers/pods executing the POV-Ray benchmark. With a single container, all the CPU is used by the application. As the machine has 12 cores, the performance of twelve containers should be similar to the performance of POV-Ray executed in a Docker container with multi-threading disabled. With this metric as a reference value, we can calculate the overhead introduced by Kubernetes in CPU usage (about 14%). From these results it seems that, for CPU-intensive applications, it is a better solution to group all containers in the same pod to reduce the overhead.

Table 3: POV-Ray experiment. Execution time (T2 in the model) for Scenario 1 and Scenario 2, and hypothesis testing.

         Scenario 1            Scenario 2
C     µ1        σ1         µ2        σ2         µ1 − µ2 = 0?
1     123.47    0.43       123.38    0.39       Yes
4     473.65    0.96       475.15    0.62       No
8     946.90    0.72       946.63    0.69       Yes
12    1417.76   1.67       1420.40   1.35       No
20    2370.21   1.16       2374.36   3.89       Yes

I/O-intensive: we use bzip as a representative benchmark of an I/O application. Table 4 shows the results, measuring the execution time of n containers compressing the Linux kernel (about 98MB) with bzip. For all sets of experiments, we accepted H0. With these results, we can conclude that the performance of I/O applications is not affected by the deployment scenario. Although access to the file system is shared by all containers in a pod, the overhead of that shared resource is negligible.

Table 4: Execution time (in seconds) of the bzip benchmark for Scenario 1 and Scenario 2, and hypothesis testing.

         Scenario 1          Scenario 2
C     µ1       σ1         µ2        σ2         µ1 − µ2 = 0?
1     16.05    0.16       15.98     0.24       Yes
4     16.94    0.15       16.896    0.16       Yes
8     19.20    1.50       19.86     0.63       Yes
12    20.63    1.52       20.35     1.17       Yes
20    36.35    2.69       35.26     0.99       Yes
40    67.85    2.59       69.21     1.35       Yes

Network application: containers inside a pod share a network connection. To determine the impact of this on each container, we deploy an iperf server in a pod and several clients in the configurations of the previous scenarios. All tests are run for 30s with TCP-based traffic. In Scenario 1, all client containers are inside one pod; in Scenario 2, each client is inside its own pod, and all of them are scheduled to the same machine to avoid the impact of the physical network. The hypothesis test is stated over the aggregate bandwidth of all containers in Scenario 1 and of all pods in Scenario 2. Results are shown in Table 5 and Table 6. From the results, we can observe that for applications with more than eight containers, the bandwidth per container is better when we deploy each container in an isolated pod. This suggests that deploying several pods with a few coupled containers is better than a single pod with a large number of containers.

Table 5: Network bandwidth with C iperf clients in Scenario 1. The iperf server and iperf clients are on the same machine.

C     µ1 (GB)   σ1      Σ BWi / C (GB)
1     1.88      0.06    1.88
4     8.61      0.21    2.15
8     15.53     0.12    1.94
12    14.99     0.21    1.25

Table 6: Network bandwidth with C iperf clients in Scenario 2. The iperf server and iperf clients are on the same machine.

C     µ2 (GB)   σ2      Σ BWi / C (GB)   H0: µ1 − µ2 = 0?
1     1.90      0.04    1.90             Yes
4     8.82      0.05    2.20             Yes
8     16.26     0.20    2.03             No
12    16.42     0.38    1.37             No
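For reference, the per-container figure in Tables 5 and 6 is simply the aggregate transferred volume divided by the number of clients; a sketch of that bookkeeping (ours, with invented run data) follows.

```python
# Summarise iperf runs the way Tables 5 and 6 do (our sketch).
# Each run reports the aggregate GB transferred by all C clients in 30s.
from statistics import mean, stdev

def summarise(runs_gb, c):
    mu, sigma = mean(runs_gb), stdev(runs_gb)
    return mu, sigma, mu / c   # aggregate mean, std dev, per-container share

runs = [8.61, 8.40, 8.82]      # invented aggregate GB for C = 4 clients
mu, sigma, per_container = summarise(runs, c=4)
print(f"mu={mu:.2f} GB, sigma={sigma:.2f}, per-container={per_container:.2f} GB")
```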
4. CONCLUSION

A performance model for Kubernetes-based deployment has been outlined. With emerging interest in applications (such as stream processing) which need to launch and terminate instances on a per-second basis, the overhead associated with VMs remains a limitation. We use a benchmark-based approach to better characterise the behaviour of a Kubernetes system (using Docker containers). We propose a Reference net-based model for the pod & container lifecycle in Kubernetes. Such a model can be used as a basis to support: (i) capacity planning and resource management; (ii) application design, specifically how an application may be structured in terms of pods and containers. Our future work involves undertaking further analysis of the model to study its properties and infer potential behaviours to support application scaling.
Acknowledgments

This work was supported in part by: the Industry and Innovation department of the Aragonese Government and European Social Funds (COSMOS group, ref. T93) and the Spanish Ministry of Economy (Programa de I+D+i Estatal de Investigación, Desarrollo e innovación Orientada a los Retos de la Sociedad, TIN2013-40809-R). V. Medel was the recipient of a fellowship from the Spanish Ministry of Economy and from the Fundación Ibercaja-CAI.
5. REFERENCES

[1] S. Brunner, M. Blochlinger, G. Toffetti, J. Spillner, and T. M. Bohnert, "Experimental evaluation of the cloud-native application design," in IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC), 2015, pp. 488–493.

[2] T. Murata, "Petri nets: Properties, analysis and applications," Proceedings of the IEEE, vol. 77, no. 4, pp. 541–580, 1989.

[3] S. Soltesz, H. Pötzl, M. E. Fiuczynski, A. Bavier, and L. Peterson, "Container-based operating system virtualization: A scalable, high-performance alternative to hypervisors," SIGOPS Oper. Syst. Rev., vol. 41, no. 3, pp. 275–287, Mar. 2007. [Online]. Available: http://doi.acm.org/10.1145/1272998.1273025

[4] W. Felter, A. Ferreira, R. Rajamony, and J. Rubio, "An updated performance comparison of virtual machines and Linux containers," in 2015 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), March 2015, pp. 171–172.

[5] M. Amaral, J. Polo, D. Carrera, I. Mohomed, M. Unuvar, and M. Steinder, "Performance evaluation of microservices architectures using containers," in 14th IEEE International Symposium on Network Computing and Applications (NCA 2015), Cambridge, MA, USA, September 28-30, 2015, pp. 27–34.

[6] H. Khazaei, J. Misic, and V. Misic, "Performance analysis of cloud computing centers using M/G/m/m+r queuing systems," IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 5, pp. 936–943, 2012.

[7] R. Tolosana-Calasanz, J. Á. Bañares, and J. M. Colom, "Towards Petri net-based economical analysis for streaming applications executed over cloud infrastructures," in Economics of Grids, Clouds, Systems, and Services – 11th International Conference, GECON'14, Cardiff, UK, September 16-18, 2014, ser. LNCS, vol. 8914, 2014, pp. 189–205.

[8] A. Merino, R. Tolosana-Calasanz, J. Á. Bañares, and J. M. Colom, "A specification language for performance and economical analysis of short term data intensive energy management services," in Economics of Grids, Clouds, Systems, and Services – 12th International Conference, GECON 2015, Cluj-Napoca, Romania, September 15-17, 2015, ser. LNCS, vol. 9512, 2015, pp. 147–163.

[9] J. Hwang, S. Zeng, F. Y. Wu, and T. Wood, "A component-based performance comparison of four hypervisors," in 2013 IFIP/IEEE International Symposium on Integrated Network Management (IM 2013). IEEE, 2013, pp. 269–276.

[10] M. Raho, A. Spyridakis, M. Paolino, and D. Raho, "KVM, Xen and Docker: A performance analysis for ARM based NFV and cloud computing," in 2015 IEEE 3rd Workshop on Advances in Information, Electronic and Electrical Engineering. IEEE, 2015, pp. 1–8.

[11] R. Valk, "Object Petri nets: Using the nets-within-nets paradigm," in Advanced Course on Petri Nets 2003 (J. Desel, W. Reisig, G. Rozenberg, eds.), LNCS vol. 3098, 2003.

[12] O. Kummer, F. Wienberg, M. Duvigneau, J. Schumacher, M. Köhler, D. Moldt, H. Rölke, and R. Valk, "An extensible editor and simulation engine for Petri nets: Renew," in International Conference on Application and Theory of Petri Nets. Springer, 2004, pp. 484–493.
