
Journal of Network and Computer Applications 110 (2018) 1–10


Review

Optimization of live virtual machine migration in cloud computing: A survey and future directions
Mostafa Noshy * , Abdelhameed Ibrahim, Hesham Arafat Ali
Computer Engineering and Systems Dept., Faculty of Engineering, Mansoura University, Egypt

Keywords: Virtualization; Hypervisor; Virtual machine; Live virtual machine migration; Downtime

Abstract: In the growing age of cloud computing, shared computing and storage resources can be accessed over the Internet. However, the infrastructure cost of the cloud can become enormous. Therefore, the concept of virtualization is applied in cloud computing systems to help users and owners achieve better utilization and efficient management of the cloud at the least cost. Live migration of virtual machines (VMs) is an essential feature of virtualization, which allows VMs to be moved from one location to another without being suspended. This process has many advantages for data centers, such as load balancing, online maintenance, power management, and proactive fault tolerance. To enhance live migration of VMs, many optimization techniques have been applied to minimize the key performance metrics of total transferred data, total migration time and downtime. This paper provides a better understanding of live migration of virtual machines and its main approaches. Specifically, it focuses on reviewing state-of-the-art optimization techniques devoted to improving live VM migration with respect to memory migration. It reviews, discusses, analyzes and compares these techniques to clarify their optimizations and their challenges. This work also highlights the open research issues that necessitate further investigation to optimize the process of live migration for virtual machines.

1. Introduction

Cloud computing depends fundamentally on a major technology called virtualization. That technology was started in the 1960s by IBM as a transparent way to provide interactive access to mainframe computers, where time-sharing and resource-sharing allowed multiple users (and applications) to use huge and highly expensive hardware concurrently (Rose, 2004; Nanda and Chiueh, 2005). Nowadays, rapid technological development of processing power and storage has made computing resources more abundant, cheaper and more powerful than before. The cloud computing trend has emerged from this development, as computing and storage resources can now be delivered to multiple users over the Internet in an on-demand fashion (Zhang et al., 2010a). Therefore, modern cloud computing environments exploit virtualization technology to increase resource utilization and reduce both computational and energy costs. Virtualization allows running multiple operating systems (OSs) on a single physical machine with high performance. Each OS runs on a separate Virtual Machine (VM), which is controlled by a hypervisor.

Live migration of VMs is a major advantage of virtualization and a powerful tool for managing cloud environment resources. A VM can be migrated seamlessly and transparently from one physical machine (the source host) to another (the destination host) while the VM is still running. Load balancing (Wood et al., 2007) is the main benefit of live VM migration: VMs are moved from over-loaded servers to lightly loaded ones to relieve congested hosts. Another benefit is power management (Nathuji and Schwan, 2007), where VMs running light-load jobs can be consolidated onto fewer servers to minimize IT operation expenses and power consumption by shutting down the vacated servers after migration completes. Proactive fault tolerance and online maintenance (Nagarajan et al., 2007) are further benefits: VMs can be migrated to another server so that physical machines can be upgraded or maintained, or to avoid expected faults before they occur. VM migration can also increase application performance by moving VMs that need more resources from resource-limited servers to resource-rich ones. Today, many hypervisors support live VM migration, such as Xen (Barham et al., 2003), KVM (Kivity et al., 2007), VMware (Vmware) and Microsoft Hyper-V (Microsoft).

* Corresponding author.
E-mail addresses: [email protected] (M. Noshy), [email protected] (A. Ibrahim), [email protected] (H.A. Ali).

https://doi.org/10.1016/j.jnca.2018.03.002
Received 11 November 2017; Received in revised form 18 February 2018; Accepted 5 March 2018
Available online 9 March 2018
1084-8045/© 2018 Elsevier Ltd. All rights reserved.

Fig. 1. Virtualization (Rose, 2004).

Live migration of VMs solves the residual dependency problem from which process migration (Milojičić et al., 2000) suffers: in process migration, the source host must remain available and network-accessible after the process has moved, whereas the source host can be decommissioned after VM migration. Live VM migration has been a hot research topic since its appearance (Clark et al., 2005), and its optimization has attracted the attention of academic and industrial researchers, who try to improve the migration process by minimizing total data transferred, total migration time and downtime. Most studies focus on the local area network (LAN) domain, where the source and destination machines are on the same LAN and storage is usually shared between them. Their main focus is therefore to migrate memory with minimal time and without performance disruption. The VM should also retain the same IP address after migration, which is achieved by generating an unsolicited ARP reply advertising that the IP has moved to another location (Clark et al., 2005). Other studies focus on migrating VMs in the Wide Area Network (WAN) domain (Travostino et al., 2006; Bradford et al., 2007; Harney et al., 2007; Mashtizadeh et al., 2014; Arif et al., 2016), where storage is not shared and must be transferred in addition to memory. These studies use techniques such as dynamic DNS, IP tunneling, and mobile IP.

Although much literature has been devoted to the VM migration area, we focus on optimization techniques that have been proposed in the context of memory migration. Storage is usually shared in data centers; thereby, memory migration is the main bottleneck in live VM migration due to frequent dirtying of VM memory pages during migration. Therefore, the main contribution of this article is to review existing optimization techniques in memory migration. It explores pre-copy, post-copy and the hybrid approach, which are the main approaches to live VM migration. The key performance metrics used to evaluate migration performance are presented. We discuss how each technique minimizes total data transferred, total migration time or downtime. We also classify the optimization techniques into compression techniques, deduplication techniques, checkpointing techniques and others. The optimization techniques in each category are discussed, analyzed and compared to better understand their contributions and limitations. This survey also introduces some hot research directions that deserve more research effort and points out the challenges of these directions. These directions include achieving more optimization in memory migration, facing the challenges of migration over WAN links, studying power cost during live VM migration, live migration of multiple VMs, and studying security issues to protect migrated VMs against attacks.

The rest of the paper is organized as follows: Section 2 explores virtualization and the VM migration process. This section also discusses the two main approaches to the live VM migration process, pre-copy and post-copy, in addition to the hybrid one. Section 3 states the key performance metrics used to assess live VM migration techniques; then, different optimization techniques that seek more efficient migration are discussed. Section 4 introduces research directions. Finally, Section 5 concludes this work.

2. Background

The live VM migration process depends fundamentally on virtualization technology. This section first briefly overviews virtualization technology. Then, we explore the process of live VM migration and its main approaches.

2.1. Virtualization

Virtualization technology facilitates sharing hardware resources to increase resource utilization and decrease operation cost, as several operating systems can use the same hardware. It enables each operating system to run in an isolated and secured virtual environment, called a virtual machine, providing the illusion that it is running on actual hardware. Virtual machines are instances of the physical machine that are created and managed by a software layer above the physical machine, called a hypervisor or Virtual Machine Monitor (VMM). Fig. 1 depicts the relation between virtual machines and the physical machine. Hypervisors guarantee the required resources for each virtual machine, including a virtual processor, virtual memory, and other virtual resources. They ensure for data center administrators that if any virtual machine is crashed or hijacked, the other virtual machines on the same physical machine are not affected. Hypervisors have two types, Type 1 and Type 2. In Type 1, the hypervisor runs directly on top of the bare-metal hardware. In contrast, Type 2 hypervisors run on a host operating system. One of the most important advantages of virtualization, provided by most hypervisors, is live virtual machine migration. Thus, we focus on this feature to reach a better understanding of its different optimization techniques.

2.2. Virtual machine migration

VM migration techniques seek to significantly improve the manageability of data centers and clusters by transferring the complete state of a VM from a source host to a destination host. It is necessary to refer to the network architecture of data centers in the context of VM migration, as shown in Fig. 2. VMs can be migrated from one server to another in the same rack through Top-of-Rack (ToR) switches. They can also be migrated to a server in another rack through the data center core network. To accomplish the migration of a VM, its states need to be migrated, which include memory, storage, and virtual device states. Fig. 3 shows the migration process for a VM from one host to another. This migration process can be live or non-live.


Fig. 2. Network architecture for data center (Canali et al., 2017).

Fig. 3. Virtual machine migration.

Non-live VM migration approaches (Kozuch and Satyanarayanan, 2002; Whitaker et al., 2004) depend on suspending the virtual machine at the source host; the VM's state is then migrated, and finally the VM is resumed at the destination host. Non-live VM migration does not achieve transparency, as the user notices an interruption during migration.

In contrast, in live VM migration techniques the VM continues to run during migration without disrupting its service. Thus, users should not be aware of the migration, owing to the transparency of live VM migration techniques. We now discuss live VM migration techniques in more detail. Although the VM's full state must be transferred to the new location, the bottleneck is transferring memory pages. There are two main approaches to live VM migration, pre-copy and post-copy, which are commonly used in live VM migration techniques. We therefore explore them, in addition to the hybrid approach, in the following subsections. These main approaches are shown in Fig. 4.

2.2.1. Pre-copy

Pre-copy (Clark et al., 2005; Nelson et al., 2005; Ibrahim et al., 2011) is the predominant approach to live VM migration. Most hypervisors, such as Xen (Barham et al., 2003), KVM (Kivity et al., 2007) and VMware (Vmware), use this approach by default. In this approach, the memory state of the VM is transferred in iterations from the source host to the destination host while the VM is still running on the source host. In the first iteration, all memory pages of the VM are transferred to the destination host. A transmitted memory page is resent to the destination in the next iteration if it is dirtied; thus, only pages modified by the running VM are transferred to the destination host in subsequent iterations. This iterative transfer of dirtied pages continues until either the number of dirtied pages falls below a predefined threshold or the number of iterations reaches a preset limit. Then, the VM is suspended at the source host, and all the remaining memory pages, in addition to the CPU state such as register values, are migrated to the destination host. Finally, the VM is restarted at the destination host, and the copy at the source host is destroyed. Pre-copy seeks to reduce the amount of data that needs to be migrated during downtime, which is the period during which the VM is suspended. Therefore, this approach reduces VM downtime and application degradation. However, transferring duplicated memory pages prolongs the total migration time and consumes network bandwidth.

In the pre-copy approach, the source host keeps the newest state of the VM's memory until the migration process is completed. Therefore, this approach is reliable, because the migration can be aborted and the VM at the source host can continue running if the destination host crashes.


Fig. 4. Live virtual machine migration approaches.
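The pre-copy loop described in Section 2.2.1 can be illustrated with a toy simulation. The dirtying model, the threshold and the iteration cap below are illustrative assumptions, not values from Xen or KVM:

```python
import random

DIRTY_THRESHOLD = 50   # stop iterating when this few pages remain dirty (assumed)
MAX_ITERATIONS = 30    # hard cap on pre-copy rounds (assumed)

def precopy_migrate(num_pages, dirty_rate, rng):
    """Simulate pre-copy; return (rounds, pages_sent, downtime_pages)."""
    dirty = set(range(num_pages))          # first iteration: all pages are sent
    pages_sent, rounds = 0, 0
    while len(dirty) > DIRTY_THRESHOLD and rounds < MAX_ITERATIONS:
        rounds += 1
        pages_sent += len(dirty)           # send the current dirty set
        # while a round is in flight, the running VM redirties some pages
        dirty = {p for p in range(num_pages) if rng.random() < dirty_rate}
    # stop-and-copy: the VM is suspended; remaining pages plus CPU state go last
    downtime_pages = len(dirty)
    pages_sent += downtime_pages
    return rounds, pages_sent, downtime_pages

rounds, sent, down = precopy_migrate(10_000, 0.002, random.Random(42))
print(rounds, sent, down)
```

A higher `dirty_rate` keeps the dirty set above the threshold for longer, reproducing the duplicated transfers and prolonged total migration time noted above, while the pages still dirty at the cap are exactly what the downtime phase must absorb.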

2.2.2. Post-copy

The post-copy approach (Hines et al., 2009; Hines and Gopalan, 2009) was suggested to decrease total migration time. First, the VM running on the source host is suspended. Then, this approach transmits only the processor state to the destination host. Finally, the VM is started at the destination host, and memory pages are then actively pushed from the source host to the destination host. A page fault is generated if the VM at the destination host wants to read a page that has not yet been transferred; the faulted page is then fetched from the source host. This migration approach thus ensures that each memory page is transferred at most once, avoiding the duplicate transmission overhead of the pre-copy approach. Therefore, it reduces the total migration time, especially for write-intensive applications. Unfortunately, this strategy may increase downtime and suffer from performance degradation when there are numerous page faults, which cause the VM to wait for pages to be fetched after it starts on the destination host.

This approach is less reliable than pre-copy. If the VM at the destination host crashes, the source host does not have the newest copy of the VM, since the destination host holds the most up-to-date copy. On the other hand, if the VM at the source host crashes, the VM at the destination host cannot continue executing with only part of the whole state. Therefore, in contrast to pre-copy, the migration cannot be aborted once it is under way.

2.2.3. Hybrid

The hybrid approach (Sahni and Varma, 2012) seeks to reduce the downtime and performance degradation that the post-copy approach suffers from due to faulted pages. It also tries to decrease total migration time and total data transferred in comparison to the pre-copy approach. First, the most frequently used VM memory pages are transferred. Then, the VM is suspended at the source host to transfer its minimum state to the destination host. Finally, the VM is resumed at the destination host, and the remaining memory pages are retrieved from the source host afterwards. Like post-copy, this approach is not reliable.

3. Optimization techniques

Before discussing the different techniques for optimizing the live VM migration process, we must mention the main performance metrics used to evaluate migration techniques. Three major metrics are used in the evaluation:

1. Total migration time: the duration between the start of the migration process and the time when the VM is no longer available on the source host. Migration techniques seek to reduce this duration as much as possible.
2. Downtime: the period during which the VM is unresponsive because its execution is suspended. Migration studies also seek to increase migration performance by reducing downtime as much as possible, which makes the migration process more transparent.
3. Total data transferred: the overall data transferred during the migration process. The more this data is reduced, the more network consumption is reduced.

We now review state-of-the-art optimization techniques, as shown in Fig. 5. These techniques are summarized in Tables 1–3.

3.1. Compression

Many optimization techniques use compression algorithms to decrease the total data transferred, and thereby the total migration time. These techniques exploit regularities in memory pages to encode them at the source host into less data before they are transferred. The reverse operation, decompression, is applied at the destination host to retrieve the original data. Although compression algorithms cause some CPU overhead, utilizing the available network bandwidth and minimizing total migration time have first priority. Jin et al. (2009) proposed a technique, called MECOM for short, to decrease the data transferred during migration based on memory compression. This technique is considered the first use of memory compression to accelerate the migration process. Data being transferred in each round are first compressed at the source host by MECOM's algorithm and then decompressed at the destination host.

Fig. 5. Taxonomy of live VM migration optimization techniques.


Table 1
Live VM migration optimization techniques based on compression.

MECOM (Jin et al., 2009). Approach: pre-copy. VMM: Xen.
  Technique: uses a simple and fast compression algorithm for memory pages with high similarity, and high-compression-ratio algorithms for pages with low similarity.
  Optimization: reduces downtime by 27.1%, total data transferred by 68.8% and total migration time by 34.93%.
  Limitation: about 30% CPU overhead due to compression and decompression.

Delta compression (Svärd et al., 2011). Approach: pre-copy. VMM: KVM.
  Technique: sends only the delta between the old memory page and the new one; uses RLE to compress the delta pages.
  Optimization: decreases downtime and total migration time.
  Limitation: compression and decompression introduce CPU overhead; an adequate cache for memory pages is required.

ME2 (Ma et al., 2012). Approach: pre-copy. VMM: Xen.
  Technique: identifies used memory pages, then sends them after compression with RLE.
  Optimization: reduces downtime by 47.6%, total data transferred by 50.5%, and total migration time by 48.2%.
  Limitation: CPU overhead due to compression and decompression; overhead due to scanning memory.

Table 2
Live VM migration optimization techniques based on deduplication, such as MDD (Zhang et al., 2010b), and based on checkpointing, such as CR/TR-Motion (Liu et al., 2009) and Vecycle (Knauth and Fetzer, 2015).

MDD (Zhang et al., 2010b). Approach: pre-copy. VMM: Xen.
  Technique: reduces transferred data using XOR between similar pages; redundant data is converted to continuous zero blocks; RLE is used for encoding.
  Optimization: reduces total data transferred by 56.60%, downtime by 26.16%, and total migration time by 32%.
  Limitation: about 47.21% average CPU overhead due to data deduplication.

CR/TR-Motion (Liu et al., 2009). Approach: pre-copy. VMM: Xen.
  Technique: transfers a checkpoint of the VM, then iteratively transfers small system execution logs which are replayed at the destination.
  Optimization: reduces on average 72.4% of downtime, 31.5% of total migration time and 95.9% of total transferred data.
  Limitation: 8.54% average application performance overhead due to checkpointing and logging.

Vecycle (Knauth and Fetzer, 2015). Approach: N/A. VMM: KVM.
  Technique: stores a checkpoint of the transferred VM at the source host, which can be used if the VM is transferred back to the source host.
  Optimization: reduces transferred data and total migration time if the migrated VM returns to its source host.
  Limitation: storage overhead due to storing checkpoints; CPU overhead due to calculating checksums; only useful when the VM is migrated back to its source.

Therefore, this technique also minimizes downtime and total migration time. To balance VM migration cost and performance, an algorithm was designed based on compression, which consists of the two following parts:

1. Determining pages which contain many zero bytes or high similarity, called strong-regularity pages.
2. Choosing an appropriate compression algorithm: a fast and simple compression algorithm for strong-regularity pages, and an algorithm with a high compression ratio for weak-regularity pages (low similarity).

Their experimental results for MECOM, compared with the default pre-copy technique of Xen, show that the technique can reduce the total migration time by 34.93%, the downtime by 27.1%, and the total data transferred on average by 68.8%. However, there is some CPU overhead, about 30%, due to the compression and decompression processes. This overhead may be acceptable in practice given the availability of CPU resources today with the development of multi-core technology. The technique achieves better results with workloads that contain zero bytes or high similarity, which leads to less total data transferred, less total migration time, and better network bandwidth utilization with less processing overhead.
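The two-part selection just described can be sketched as follows. The byte-frequency regularity measure, the 0.5 threshold, and the use of zlib levels 1 and 9 as stand-ins for the "fast" and "high-ratio" algorithm classes are our illustrative assumptions, not MECOM's actual codecs:

```python
import zlib
from collections import Counter

PAGE_SIZE = 4096

def regularity(page: bytes) -> float:
    """Fraction of the page taken up by its most common byte value."""
    (_, count), = Counter(page).most_common(1)
    return count / len(page)

def compress_page(page: bytes, strong_threshold: float = 0.5) -> bytes:
    """Strong-regularity pages (zero-heavy or highly repetitive) take the
    fast, low-effort path; weak-regularity pages take the high-ratio path."""
    level = 1 if regularity(page) >= strong_threshold else 9
    return zlib.compress(page, level)

zero_page = bytes(PAGE_SIZE)                          # strong regularity
mixed_page = bytes(range(256)) * (PAGE_SIZE // 256)   # weak regularity
for page in (zero_page, mixed_page):
    packed = compress_page(page)
    assert zlib.decompress(packed) == page            # lossless round trip
    print(len(packed), "bytes instead of", PAGE_SIZE)
```

The design point mirrors the paper's trade-off: spending more CPU per page only where the cheap path would compress poorly.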

Table 3
Other live VM migration optimization techniques.

Reuse distance (Alamdari and Zamanifar, 2012). Approach: pre-copy. VMM: Xen.
  Technique: measures the distance between a memory page and its next update, to skip transferring memory pages with low distances.
  Optimization: reduces total migration time, downtime and total data transferred for VMs with write-intensive applications.
  Limitation: CPU overhead due to distance calculation for memory pages.

TPO (Sharma and Chawla, 2016). Approach: pre-copy. VMM: Xen.
  Technique: reduces the transfer of memory pages based on three phases.
  Optimization: reduces downtime by 3%, total data transferred by 71%, and total migration time by 70% for high workloads.
  Limitation: 0.05% space overhead and CPU processing overhead to predict frequently updated pages.

SonicMigration (Koto et al., 2012). Approach: pre-copy. VMM: Xen.
  Technique: avoids transferring soft pages.
  Optimization: reduces 68.3% of total migration time and 83.9% of network traffic.
  Limitation: CPU and memory overhead in the VM-suspend phase.

Agile migration (Deshpande et al., 2016). Approach: hybrid. VMM: KVM.
  Technique: transfers the hot memory pages of the VM, whereas cold memory pages are evicted to a portable per-VM swap device.
  Optimization: reduces memory pressure, total migration time and total data transferred; eliminates the transfer of cold pages without leaving residual state at the source.
  Limitation: retrieving faulted pages may increase the downtime.

Introspection-Based Memory Pruning (Wang et al., 2017). Approach: pre-copy. VMM: KVM.
  Technique: avoids transferring both free and cache memory pages.
  Optimization: reduces 72% on average of total migration time.
  Limitation: time overhead due to introspection of pages and handling of missing cache pages.
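The page-skipping idea behind the reuse-distance and TPO rows in Table 3 can be sketched with a toy predictor. The streak-based heuristic and the HISTORY_ROUNDS parameter are our illustrative assumptions, not the published algorithms:

```python
from collections import defaultdict

HISTORY_ROUNDS = 3   # rounds of dirty history consulted (assumed parameter)

class SkipPredictor:
    """Toy heuristic: a page dirtied in each of the last HISTORY_ROUNDS
    rounds is expected to be dirtied again soon, so defer sending it."""
    def __init__(self):
        self.streak = defaultdict(int)   # page -> consecutive dirty rounds

    def observe(self, dirty_pages, all_pages):
        for p in all_pages:
            self.streak[p] = self.streak[p] + 1 if p in dirty_pages else 0

    def pages_to_send(self, dirty_pages):
        # pages on a long dirty streak are "skipped pages" for this round
        return {p for p in dirty_pages if self.streak[p] < HISTORY_ROUNDS}

pred = SkipPredictor()
pages = range(8)
for round_dirty in ({1, 2, 5}, {1, 2}, {1, 2, 7}):
    pred.observe(round_dirty, pages)
# pages 1 and 2 were dirty in all three rounds, so they are deferred
print(sorted(pred.pages_to_send({1, 2, 7})))  # → [7]
```

Deferring hot pages trades a slightly larger final stop-and-copy set for fewer wasted retransmissions in the iterative rounds.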


Svärd et al. (2011) modified the KVM hypervisor to use delta compression, to optimize migration throughput and decrease downtime. They use a live migration algorithm called XBRLE, which applies an exclusive-OR (XOR) binary operation with Run-Length Encoding (RLE). This algorithm does not send the dirtied page itself; it sends only the delta between the old memory page, which is cached, and the new dirtied one, obtained by applying the XOR operation. RLE is then used to compress the delta page before it is transferred. The compressed delta page is decompressed at the destination host, and finally an XOR operation is applied between the delta page and the old memory page to reconstruct the new memory page. Workloads whose memory pages are dirtied repeatedly, and environments with low network bandwidth, benefit most from this technique. Although avoiding the transfer of full dirtied pages can achieve good results, the proposed technique needs a large cache memory to store memory pages.

Ma et al. (2012) adopted a technique that reduces the transferred memory pages in order to minimize total migration time and downtime. Their technique is based on Memory Exploration and Memory Encoding, so it is named ME2 for short. ME2 explores the memory to identify the unallocated memory pages and transfers only the used memory pages. It compresses the allocated memory pages with the Run-Length Encoding (RLE) algorithm before transferring them. Their experimental results for ME2, compared with the default pre-copy technique of Xen, show that their technique can reduce on average the downtime by 47.6%, the total transferred data by 50.5% and the total migration time by 48.2%. Although scanning the memory pages to determine the allocated pages that should be transferred causes some time overhead, it is compensated by the reduction in total migration time. Compression is also applied in other works, such as (Hu et al., 2013; Hacking and Hudzia, 2009; Zhang et al., 2010b). In general, applying compression algorithms in live VM migration optimization techniques can play an effective role in decreasing total data transferred and total migration time, which usually outweighs both the time and processing cost of compression and decompression.

3.2. Deduplication

Some optimization techniques also try to reduce the amount of migrated data, but by relying on data deduplication. Based on deduplication, they can save network bandwidth and reduce total migration time by avoiding the transfer of identical or similar memory pages. Zhang et al. (2010b) proposed a deduplication-based technique called Migration with Data Deduplication (MDD). This technique can reduce network consumption, and both total migration time and downtime, by minimizing the total data transferred during migration. It depends on hash-based fingerprint technology to determine similarity among memory pages. It exploits self-similarity of the memory to decrease the transferred data by applying an XOR operation between identical or similar pages, converting the redundant data into continuous zero blocks. It then eliminates the redundant data by encoding the parity with the simplest encoding algorithm (RLE).

Experimental results for MDD, compared with the default pre-copy technique of Xen, show that the technique can reduce the total migration time by 32%, the downtime by 26.16%, and the total data transferred by 56.60%. However, MDD requires 47.21% more CPU than Xen. In addition to the processing cost, it also consumes memory resources to cache memory pages during migration. MDD applied deduplication in the context of migrating memory within a cluster where storage is shared. Deduplication can therefore be utilized to achieve good results over low-bandwidth links in WAN environments, where a large amount of data is migrated, especially for workloads with more similarity. Deduplication is also exploited in inter-rack live migration (IRLM) (Deshpande et al., 2012) to avoid transferring identical memory pages, depending on inter-rack similarity rather than self-similarity when migrating multiple VMs across racks. Other works, such as (Riteau et al., 2011; Deshpande et al., 2011, 2017; Al-Kiswany et al., 2011; Zhang et al., 2017), have also applied deduplication to optimize live VM migration.

3.3. Checkpointing

Liu et al. (2009) introduced a technique that adopts Checkpointing/Recovery and Trace/Replay technology, called CR/TR-Motion for short, to make VM migration faster and more transparent. This technique transfers a checkpoint of the working VM while the VM is still running on the source host. Then, it iteratively transfers the small system execution log of non-deterministic events (time and external input) instead of the large volume of the VM's memory pages. These logs are replayed at the destination host to reconstruct a consistent VM memory image. Finally, the VM at the source host is suspended when the log size falls below a threshold value; the remaining log file is transferred to the destination host to be replayed there, after which the VM runs normally at the destination host instead of the source host.

Experimental results compared to the pre-copy approach show that this technique can reduce on average up to 72.4% of downtime, 31.5% of total migration time and 95.9% of total transferred data. However, the technique causes about 8.54% application performance overhead due to checkpointing and logging. This overhead is small compared to the great improvement in total migration time, downtime and total data transferred. The technique may achieve good results in low-bandwidth WAN environments, but the difference between the log growth rate and the network transfer rate must be taken into account.

Vecycle (Knauth and Fetzer, 2015) depends on storing a checkpoint of the transferred VM on the local disk of the source host. If the VM is later transferred back to the source host, this checkpoint decreases the total transferred data and total migration time: only the memory pages that have been updated need to be migrated, and these updated pages are determined using checksums. The technique thus decreases total migration time, since retrieving memory pages from local storage is faster than retrieving them over slow networks. However, it may cause pure storage overhead, due to storing checkpoints, if the VM is never transferred back to the source host or if the VM memory has changed drastically.

3.4. Other optimization techniques

3.4.1. Reuse distance

Alamdari and Zamanifar (2012) used an algorithm called reuse distance, which minimizes the number of dirty memory pages sent in each iteration. It measures the distance between a memory page and its next update in each iteration. Memory pages with low distances are updated (dirtied) frequently; these pages are called skipped pages, and they are not transferred in the iteration. Experimental results show that this technique can reduce total migration time, downtime and total transferred data for VMs with write-intensive applications, because it reduces the transfer of frequently dirtied pages.

3.4.2. Three-phase optimization

Sharma and Chawla (2016) proposed a technique, called TPO for short, that depends on three phases to optimize live virtual machine migration. In the first phase, it reduces the memory pages transferred in the first iteration by transferring only un-updated memory pages. In the second phase, it reduces the memory pages transferred from the second iteration to the one before last by using historical statistics to predict frequently updated memory pages and avoid transferring them. In the last phase, it transfers the remaining memory pages; if these are large, they are compressed with the RLE algorithm before transfer. Experimental results for TPO, compared with the default pre-copy technique of Xen, show that it can reduce the total data transferred by 71%, the total migration time by 70%, and the downtime by 3% for high workloads.
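Several techniques above (XBRLE, MDD, TPO) lean on run-length encoding, typically applied to XOR deltas in which unchanged bytes become zero runs. A minimal sketch of that shared primitive follows, using a toy 64-byte "page" rather than a real 4 KB page and a simple pair-based RLE, not any of the papers' exact encodings:

```python
def xor_delta(old: bytes, new: bytes) -> bytes:
    """Byte-wise XOR of two versions of a page; unchanged bytes become
    zeros, which run-length encoding then collapses."""
    return bytes(a ^ b for a, b in zip(old, new))

def rle_encode(data: bytes) -> list:
    """Encode as (byte_value, run_length) pairs."""
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)
        else:
            runs.append((b, 1))
    return runs

def rle_decode(runs) -> bytes:
    return b"".join(bytes([value]) * length for value, length in runs)

# a 64-byte "page" in which only two bytes changed between rounds
old = bytes(64)
new = bytearray(old)
new[10], new[40] = 0xFF, 0x0F
delta = xor_delta(old, bytes(new))
runs = rle_encode(delta)
# receiver side: decode the runs and XOR against the cached old page
restored = xor_delta(old, rle_decode(runs))
assert restored == bytes(new)
print(len(runs))  # → 5 runs instead of 64 raw bytes
```

This also shows why the approach needs the old page cached at both ends, which is exactly the cache-memory cost noted for XBRLE.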

Fig. 6. Agile VM migration: only the working set is transferred directly on the TCP connection; the destination host can retrieve cold pages from the per-VM swap device on demand (Deshpande et al., 2016).
compared with the default pre-copy technique of Xen, show that this technique can reduce the total data transferred for high workloads by 71%, the total migration time by 70%, and the downtime by 3%.

3.4.3. SonicMigration

Koto et al. (2012) proposed a technique to decrease network traffic and total migration time by transferring only the necessary memory pages. This technique avoids transferring soft pages, which can be free pages or file cache pages, to decrease migration noise. It also uses a shared memory between the VMM and the guest kernel to store and update the addresses of soft memory pages. To keep data consistent, the kernel is updated at the VM-suspend phase. At this stage, the VMM interrupts the VM before suspending it. After the kernel data is updated, the guest sends a hypercall to the VMM to start the stop-and-copy phase. This hypercall introduces CPU and memory overhead. Experiment results compared to Xen show that SonicMigration can reduce the total migration time by 68.3% and the network traffic by 83.9%.

3.4.4. Agile migration

Deshpande et al. (2016) proposed a hybrid technique of pre-copy and post-copy for live VM migration to quickly solve the resource pressure problem based on agility. This technique reduces the number of transferred memory pages. It tracks the working set memory pages (hot memory pages) of each VM to transfer them from the source to the destination, whereas the non-working set (cold memory pages) is evicted to a portable per-VM swap device. After resuming execution at the destination host, the dirtied working set memory pages can be requested by the destination or actively pushed from the source. The migration of memory pages from source to destination is shown in Fig. 6.

3.4.5. Introspection-based memory pruning

Wang et al. (2017) proposed a technique to reduce the total data transferred, and thereby the total migration time, based on classifying memory pages into five categories: anonymous, inode, kernel, cache and free memory pages. Thus, they transfer only the first three categories of memory pages, as they are necessary for kernel execution. Although free memory pages do not affect normal execution, avoiding the transfer of cache memory pages may lead to performance degradation due to inconsistency between the actually transferred memory pages and the kernel state. Therefore, they use a kernel notification mechanism to handle missing pages by making the kernel aware of them. Experiment results compared with the default pre-copy of KVM showed that this technique could reduce the total migration time by 72% on average.

4. Research directions

This section refers to the main research directions in live VM migration and their challenges. These research directions include: memory migration optimization, which is the major bottleneck in live migrating VMs over LAN links, especially when migrating VMs with memory-intensive workloads; facing the challenges of migration over WAN links to achieve more optimization in the migration process; studying the power cost due to live VM migration, which should be as small as possible; live migration of multiple VMs; and security issues to ensure that migrated VMs are protected against attacks. Fig. 7 shows the main research directions that need more research for further optimization. The aforementioned research issues and their challenges will be explored in more detail in the following subsections.

4.1. Optimization in memory migration

As we explored, there are many techniques to optimize the live VM migration process. These techniques try to optimize the migration by decreasing the total migration time, total data transferred or downtime. The main focus of this survey is to discuss different techniques to enhance migration within the same LAN, where the storage does not need to be transferred from the source host to the destination host because it is shared.

Thus, memory migration is the main bottleneck of migration over LAN. Although the explored techniques optimized memory migration in different ways, it remains a hot research topic, and better optimization can still be reached by utilizing available resources.
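Several of the surveyed techniques (for example, TPO and related dirty memory prediction) share one recurring idea: track how often each page was dirtied in past pre-copy iterations and defer pages whose history suggests they will be dirtied again. A minimal sketch of that idea follows; it is illustrative only, and the threshold and tracking details are assumptions rather than the scheme of any specific paper.

```python
from collections import Counter

class DirtyPagePredictor:
    """Defer pages that were dirtied in many recent pre-copy iterations."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold     # dirty count at which a page is deferred
        self.dirty_counts = Counter()  # page number -> times observed dirty

    def record_iteration(self, dirty_pages):
        # Update history after each pre-copy round's dirty bitmap scan.
        for page in dirty_pages:
            self.dirty_counts[page] += 1

    def pages_to_send(self, dirty_pages):
        # Send only pages not predicted to be dirtied again soon; write-hot
        # pages are left for the final stop-and-copy round.
        return [p for p in dirty_pages if self.dirty_counts[p] < self.threshold]

predictor = DirtyPagePredictor(threshold=3)
# Page 7 is dirtied in every iteration (write-hot); pages 1, 2, 3 only once.
for dirty in ([7, 1], [7, 2], [7, 3]):
    predictor.record_iteration(dirty)
print(predictor.pages_to_send([7, 4]))  # prints [4]: page 7 is deferred
```

Deferring page 7 avoids retransmitting it in every round; it is sent once, during the final downtime window, instead.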

Fig. 7. Main research directions in live VM migration.


Table 4
Dimensions of security.

Access Control: Access control lists are defined to prevent unauthorized activities from attackers.
Authentication: Source and destination hosts should be authenticated, users should be trusted, firewalls should be applied, and system administrator roles should be used.
Non-Repudiation: Activities of hypervisors and users should be monitored.
Data Confidentiality: Encryption is used to protect migrated data.
Communication: The transmission channel must be protected against active and passive attacks. VPN tunnels are one solution to these attacks.
Data Integrity: Attackers may inject malicious code, manipulate control messages of the migration protocols, or migrate a VM to be able to control its users and processes. Encryption, digital signatures, and checksums can mitigate these attacks.
Availability: Unauthorized attackers may migrate many VMs to degrade performance and decrease availability. Packet filtering, access control lists, authentication, and monitoring can control these attacks.
Privacy: VMs, which store users' data, may be live migrated to another physical host or a third party without users' knowledge. Also, attackers may migrate malicious VMs to the target host to sniff information from a VM there. VLANs can be a solution to these issues.

This optimization can be achieved by designing a technique that applies one, or more than one, of the main optimization approaches (compression, deduplication and checkpointing), or by using a technique to predict dirty pages of memory in advance, such as (Sharma and Chawla, 2016; Wu et al., 2017). These proposed techniques may lead to more reduction in the migrated memory, which results in more optimization in total migration time, downtime or both.

4.2. Migration over WAN

Although live migration over LAN links is the major interest of much research in this field, there is also considerable research, such as (Travostino et al., 2006; Bradford et al., 2007; Harney et al., 2007; Mashtizadeh et al., 2014; Arif et al., 2016), that studies migration over WAN links. This research area is more complex than migration over LAN links and needs more research effort. The main difficulty is the large size of the data that needs to be transferred, since the storage is not shared and must be migrated. This large-sized data suffers from the heterogeneity of network architecture and from migration over intermittent and low-bandwidth links to a distant host. Therefore, finding a way to decrease the total data transferred over WAN links will increase the utilization of network links and will also decrease the total migration time. Also, using high-speed links can reduce the total migration time and downtime, which increases migration performance. As we deal with migration of the storage over long distances, reliability should be taken into account to ensure data consistency. Although replication (Ramakrishnan et al., 2007) is the common solution to achieve reliability, erasure codes (Mugisha and Zhang, 2017) can be a good alternative solution with less storage overhead (Kralevska et al., 2017). There is also much research (Huang et al., 2013; Kralevska et al., 2016; Li et al., 2017) that seeks to use erasure codes efficiently to achieve more reliability in data storage.

4.3. Power consumption of live VM migration

Introducing energy-aware contributions in cloud computing (E-eco, 2017; Shojafar et al., 2016a; Shojafar et al., 2016b; More and Ingle, 2018) has become a hot research topic for both economic and environmental reasons. As we previously pointed out, power management is one of the major applications of live VM migration in data centers, where VMs can be consolidated onto fewer physical servers to minimize power consumption. However, studying the power consumed by the live VM migration process itself is a significant research point that was previously overlooked. Thus, more research has recently been devoted to studying the energy overhead of this process. The additional consumed power has two parts, one at the source host and the other at the destination host. This power cost is due to using more computing and network resources to accomplish the migration process. Huang et al. (2011) demonstrated the power cost of live VM migration based on a linear model that depends on CPU utilization. On the other hand, Liu et al. (2013) proposed another linear model to study this cost based on the amount of transferred data. Strunk (2013) proposed a third linear model as a function of VM memory size and the network bandwidth between the source host and the destination host. Dhanoa and Khurmi (2015) analyzed the effect of VM size and network bandwidth on migration time and consumed power during live VM migration. Prediction of the energy cost due to live VM migration is introduced based on the resource utilization of the servers in (Rybina and Schill, 2016). The power cost of VM migration is modeled in (Canali et al., 2017), where CPU overhead and data transfer due to migration are considered. Thus, reaching a more accurate prediction model for the power cost of live VM migration will help to take better migration decisions in the context of data center power management scenarios. Also, developing optimization techniques for live VM migration with less migration time and processing overhead will help to minimize the power cost during the migration process.

4.4. Live migration of multiple VMs

Many research efforts have been devoted to optimizing the live migration of a single VM, but there is also a need to live migrate multiple VMs in real applications within data centers. Some optimization techniques, such as (Deshpande et al., 2011, 2012; Deshpande and Keahey, 2017; Sun et al., 2016; Onoue et al., 2017), have been proposed in recent years to enhance the live migration of multiple VMs. The main concern of these techniques is to reduce the total migration time, which is the total time from starting the migration of the first VM to finishing the migration of the last one, and the total downtime, which is the total time of the periods during which any VM is unresponsive. Thus, scheduling the multiple VM migrations is the main challenge in optimizing the performance. Another important challenge that should be taken into account is the interference among VMs, where migrating some of the correlated VMs may degrade service due to network latency, especially over WAN links. Xu et al. (2014) analyzed VM migration interference and co-location interference during and after migration on both source servers and destination servers. In general, introducing techniques that efficiently utilize the available bandwidth to migrate multiple VMs with the least service degradation is an attractive research direction.

4.5. Security

Security, especially over WAN links, is a major challenge in VM migration. Migrated VMs must be isolated and well secured during the migration process, because they may contain encryption keys, passwords, or other sensitive data (Garfinkel and Rosenblum, 2005; Perez-Botero, 2011). Over WAN links, VMs are transferred to their destination hosts on shared links, over long distances, and through heterogeneous network architecture, which makes them vulnerable to attacks. Therefore, the source host and the destination host should be trusted. Also, applying a suitable encryption algorithm will help to protect VMs against malicious activities. The hypervisor, communication channel, and end-users should be sufficiently secured against attacks. Security has several dimensions (Aiash et al., 2014), which are summarized in Table 4.
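As an illustration of the data integrity dimension in Table 4, a migration channel can attach an authentication tag to each transferred page so the destination detects tampering in transit. The sketch below uses HMAC-SHA256; it is illustrative only, the pre-shared key is an assumption, and a real hypervisor would typically protect the whole migration stream, for example with a TLS or VPN tunnel.

```python
import hashlib
import hmac

# Shared key, assumed to be pre-exchanged between trusted source and
# destination hosts (key distribution is outside this sketch).
KEY = b"pre-shared-migration-key"

def seal_page(page: bytes) -> tuple:
    # Source side: compute an HMAC-SHA256 tag for each migrated page.
    return page, hmac.new(KEY, page, hashlib.sha256).digest()

def verify_page(page: bytes, tag: bytes) -> bool:
    # Destination side: accept only pages whose tag matches; a constant-time
    # comparison avoids leaking tag information through timing.
    return hmac.compare_digest(tag, hmac.new(KEY, page, hashlib.sha256).digest())

page, tag = seal_page(b"\x00" * 4096)
assert verify_page(page, tag)                    # untampered page is accepted
assert not verify_page(b"\x01" + page[1:], tag)  # modified page is rejected
```

The per-page tags add a fixed 32-byte overhead, which is small relative to a 4 KiB page; combining them with encryption would also cover the confidentiality dimension.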


5. Conclusion

Live VM migration is a powerful tool that helps administrators of cloud data centers to manage their resources effectively. This paper summarized the concept of live VM migration, its advantages, and its approaches. It introduced the main performance metrics used to assess live VM migration. It focused on surveying the state-of-the-art optimization techniques for memory migration, which in general try to minimize total migration time, total data transferred and downtime. It classified the discussed optimization techniques according to the technique they adopt. The optimization techniques in each category were studied and compared to identify their strengths and weaknesses. The research directions that need more investigation to optimize the migration process were also discussed.

References

Aiash, M., Mapp, G., Gemikonakli, O., 2014. Secure live virtual machines migration: issues and solutions. In: Proceedings of the 28th International Conference on Advanced Information Networking and Applications Workshops, WAINA'14. IEEE, pp. 160-165, https://doi.org/10.1109/WAINA.2014.35.
Al-Kiswany, S., Subhraveti, D., Sarkar, P., Ripeanu, M., 2011. VMFlock: virtual machine co-migration for the cloud. In: Proceedings of the 20th International Symposium on High Performance Distributed Computing, HPDC '11. ACM, pp. 159-170, https://doi.org/10.1145/1996130.1996153.
Alamdari, J.F., Zamanifar, K., 2012. A reuse distance based precopy approach to improve live migration of virtual machines. In: 2012 2nd IEEE International Conference on Parallel, Distributed and Grid Computing, pp. 551-556, https://doi.org/10.1109/PDGC.2012.6449880.
Arif, M., Kiani, A.K., Qadir, J., 2016. Machine learning based optimized live virtual machine migration over WAN links. Telecommun. Syst. 64 (2), 245-257, https://doi.org/10.1007/s11235-016-0173-3.
Barham, P., Dragovic, B., Fraser, K., Hand, S., Harris, T., Ho, A., Neugebauer, R., Pratt, I., Warfield, A., 2003. Xen and the art of virtualization. ACM SIGOPS Oper. Syst. Rev. 37 (5), 164-177, https://doi.org/10.1145/1165389.945462.
Bradford, R., Kotsovinos, E., Feldmann, A., Schiöberg, H., 2007. Live wide-area migration of virtual machines including local persistent state. In: Proceedings of the 3rd International Conference on Virtual Execution Environments, VEE '07. ACM, pp. 169-179, https://doi.org/10.1145/1254810.1254834.
Canali, C., Lancellotti, R., Shojafar, M., 2017. A computation- and network-aware energy optimization model for virtual machines allocation. In: Proceedings of the 7th International Conference on Cloud Computing and Services Science - Volume 1: CLOSER. INSTICC, SciTePress, pp. 71-81, https://doi.org/10.5220/0006231400710081.
Clark, C., Fraser, K., Hand, S., Hansen, J.G., Jul, E., Limpach, C., Pratt, I., Warfield, A., 2005. Live migration of virtual machines. In: Proceedings of the 2nd Conference on Symposium on Networked Systems Design & Implementation - Volume 2, NSDI'05. USENIX Association, pp. 273-286, http://dl.acm.org/citation.cfm?id=1251203.1251223.
Deshpande, U., Keahey, K., 2017. Traffic-sensitive live migration of virtual machines. Future Generat. Comput. Syst. 72 (Suppl. C), 118-128, https://doi.org/10.1016/j.future.2016.05.003.
Deshpande, U., Wang, X., Gopalan, K., 2011. Live gang migration of virtual machines. In: Proceedings of the 20th International Symposium on High Performance Distributed Computing, HPDC '11. ACM, pp. 135-146, https://doi.org/10.1145/1996130.1996151.
Deshpande, U., Kulkarni, U., Gopalan, K., 2012. Inter-rack live migration of multiple virtual machines. In: Proceedings of the 6th International Workshop on Virtualization Technologies in Distributed Computing Date, VTDC '12. ACM, pp. 19-26, https://doi.org/10.1145/2287056.2287062.
Deshpande, U., Chan, D., Guh, T.Y., Edouard, J., Gopalan, K., Bila, N., 2016. Agile live migration of virtual machines. In: 2016 IEEE International Parallel and Distributed Processing Symposium (IPDPS), pp. 1061-1070, https://doi.org/10.1109/IPDPS.2016.120.
Deshpande, U., Chan, D., Chan, S., Gopalan, K., Bila, N., 2017. Scatter-gather live migration of virtual machines. IEEE Trans. Cloud Comput. PP (99), 1-1, https://doi.org/10.1109/TCC.2015.2481424.
Dhanoa, I.S., Khurmi, S.S., 2015. Analyzing energy consumption during VM live migration. In: International Conference on Computing, Communication and Automation, pp. 584-588, https://doi.org/10.1109/CCAA.2015.7148475.
E-eco, 2017. Performance-aware energy-efficient cloud data center orchestration. J. Netw. Comput. Appl. 78, 83-96, https://doi.org/10.1016/j.jnca.2016.10.024.
Garfinkel, T., Rosenblum, M., 2005. When virtual is harder than real: security challenges in virtual machine based computing environments. In: Proceedings of the 10th Conference on Hot Topics in Operating Systems - Volume 10, HOTOS'05. USENIX Association, pp. 20-20, http://dl.acm.org/citation.cfm?id=1251123.1251143.
Hacking, S., Hudzia, B., 2009. Improving the live migration process of large enterprise applications. In: Proceedings of the 3rd International Workshop on Virtualization Technologies in Distributed Computing, VTDC '09. ACM, pp. 51-58, https://doi.org/10.1145/1555336.1555346.
Harney, E., Goasguen, S., Martin, J., Murphy, M., Westall, M., 2007. The efficacy of live virtual machine migrations over the internet. In: Proceedings of the 2nd International Workshop on Virtualization Technology in Distributed Computing, VTDC '07. ACM, pp. 8:1-8:7, https://doi.org/10.1145/1408654.1408662.
Hines, M.R., Gopalan, K., 2009. Post-copy based live virtual machine migration using adaptive pre-paging and dynamic self-ballooning. In: Proceedings of the 2009 ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments, VEE '09. ACM, pp. 51-60, https://doi.org/10.1145/1508293.1508301.
Hines, M.R., Deshpande, U., Gopalan, K., 2009. Post-copy live migration of virtual machines. SIGOPS Oper. Syst. Rev. 43 (3), 14-26, https://doi.org/10.1145/1618525.1618528.
Hu, L., Zhao, J., Xu, G., Ding, Y., Chu, J., 2013. HMDC: live virtual machine migration based on hybrid memory copy and delta compression. Appl. Math. Inf. Sci. 7 (2L), 639-646, www.naturalspublishing.com/Article.asp?ArtcID=3263.
Huang, Q., Gao, F., Wang, R., Qi, Z., 2011. Power consumption of virtual machine live migration in clouds. In: 2011 Third International Conference on Communications and Mobile Computing, pp. 122-125, https://doi.org/10.1109/CMC.2011.62.
Huang, C., Chen, M., Li, J., 2013. Pyramid codes: flexible schemes to trade space for access efficiency in reliable data storage systems. Trans. Storage 9 (1), 3:1-3:28, https://doi.org/10.1145/2435204.2435207.
Ibrahim, K.Z., Hofmeyr, S., Iancu, C., Roman, E., 2011. Optimized pre-copy live migration for memory intensive applications. In: Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis, SC '11. ACM, pp. 40:1-40:11, https://doi.org/10.1145/2063384.2063437.
Jin, H., Deng, L., Wu, S., Shi, X., Pan, X., 2009. Live virtual machine migration with adaptive memory compression. In: 2009 IEEE International Conference on Cluster Computing and Workshops, pp. 1-10, https://doi.org/10.1109/CLUSTR.2009.5289170.
Kivity, A., Kamay, Y., Laor, D., Lublin, U., Liguori, A., 2007. KVM: the Linux virtual machine monitor. In: Proceedings of the Linux Symposium, Vol. 1 of OLS '07, pp. 225-230, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.488.2278.
Knauth, T., Fetzer, C., 2015. VeCycle: recycling VM checkpoints for faster migrations. In: Proceedings of the 16th Annual Middleware Conference, Middleware '15. ACM, pp. 210-221, https://doi.org/10.1145/2814576.2814731.
Koto, A., Yamada, H., Ohmura, K., Kono, K., 2012. Towards unobtrusive VM live migration for cloud computing platforms. In: Proceedings of the Asia-Pacific Workshop on Systems, APSYS '12. ACM, pp. 7:1-7:6, https://doi.org/10.1145/2349896.2349903.
Kozuch, M., Satyanarayanan, M., 2002. Internet suspend/resume. In: Proceedings of the Fourth IEEE Workshop on Mobile Computing Systems and Applications, WMCSA '02. IEEE Computer Society, p. 40, http://dl.acm.org/citation.cfm?id=832315.837557.
Kralevska, K., Gligoroski, D., Øverby, H., 2016. Balanced locally repairable codes. In: 2016 9th International Symposium on Turbo Codes and Iterative Information Processing (ISTC), pp. 280-284, https://doi.org/10.1109/ISTC.2016.7593121.
Kralevska, K., Gligoroski, D., Jensen, R.E., Øverby, H., 2017. HashTag erasure codes: from theory to practice. IEEE Trans. Big Data PP (99), 1-1, https://doi.org/10.1109/TBDATA.2017.2749255.
Li, R., Hu, Y., Lee, P.P.C., 2017. Enabling efficient and reliable transition from replication to erasure coding for clustered file systems. IEEE Trans. Parallel Distr. Syst. 28 (9), 2500-2513, https://doi.org/10.1109/TPDS.2017.2678505.
Liu, H., Jin, H., Liao, X., Hu, L., Yu, C., 2009. Live migration of virtual machine based on full system trace and replay. In: Proceedings of the 18th ACM International Symposium on High Performance Distributed Computing, HPDC '09. ACM, pp. 101-110, https://doi.org/10.1145/1551609.1551630.
Liu, H., Jin, H., Xu, C.-Z., Liao, X., 2013. Performance and energy modeling for live migration of virtual machines. Cluster Comput. 249-264, https://doi.org/10.1007/s10586-011-0194-3.
Ma, Y., Wang, H., Dong, J., Li, Y., Cheng, S., 2012. ME2: efficient live migration of virtual machine with memory exploration and encoding. In: 2012 IEEE International Conference on Cluster Computing, pp. 610-613, https://doi.org/10.1109/CLUSTER.2012.52.
Mashtizadeh, A.J., Cai, M., Tarasuk-Levin, G., Koller, R., Garfinkel, T., Setty, S., 2014. XvMotion: unified virtual machine migration over long distance. In: Proceedings of the 2014 USENIX Conference on USENIX Annual Technical Conference, USENIX ATC'14. USENIX Association, pp. 97-108, http://dl.acm.org/citation.cfm?id=2643634.2643645.
Microsoft, https://www.microsoft.com/en-us/cloud-platform/virtualization. (Accessed 2 February 2018).
Milojičić, D.S., Douglis, F., Paindaveine, Y., Wheeler, R., Zhou, S., 2000. Process migration. ACM Comput. Surv. 32 (3), 241-299, https://doi.org/10.1145/367701.367728.
More, N.S., Ingle, R.B., 2018. Research issues for energy-efficient cloud computing. In: Intelligent Computing and Information and Communication. Springer Singapore, Singapore, pp. 265-272, https://doi.org/10.1007/978-981-10-7245-1_27.
Mugisha, E., Zhang, G., 2017. Reliable multi-cloud storage architecture based on erasure code to improve storage performance and failure recovery. Inter. J. Adv. Cloud Comput. Appl. Res. 1 (2), https://doi.org/10.23953/cloud.ijaccar.260.
Nagarajan, A.B., Mueller, F., Engelmann, C., Scott, S.L., 2007. Proactive fault tolerance for HPC with Xen virtualization. In: Proceedings of the 21st Annual International Conference on Supercomputing, ICS '07. ACM, pp. 23-32, https://doi.org/10.1145/1274971.1274978.
Nanda, S., Chiueh, T.-c., 2005. A Survey on Virtualization Technologies. Technical Report, Department of Computer Science, SUNY at Stony Brook, pp. 1-42, http://kabru.eecs.umich.edu/papers/publications/2011/TR179.pdf.
Nathuji, R., Schwan, K., 2007. VirtualPower: coordinated power management in virtualized enterprise systems. ACM SIGOPS Oper. Syst. Rev. 41 (6), 265-278, https://doi.org/10.1145/1323293.1294287.
Nelson, M., Lim, B.-H., Hutchins, G., 2005. Fast transparent migration for virtual machines. In: Proceedings of the Annual Conference on USENIX Annual Technical Conference, ATEC '05. USENIX Association, pp. 25-25, http://dl.acm.org/citation.cfm?id=1247360.1247385.
Onoue, K., Imai, S., Matsuoka, N., 2017. Scheduling of parallel migration for multiple virtual machines. In: 2017 IEEE 31st International Conference on Advanced Information Networking and Applications (AINA), pp. 827-834, https://doi.org/10.1109/AINA.2017.136.
Perez-Botero, D., 2011. A Brief Tutorial on Live Virtual Machine Migration from a Security Perspective. University of Princeton, USA, https://pdfs.semanticscholar.org/b562/a31a55998bab7fc416622e697844ae72f318.pdf.
Ramakrishnan, K.K., Shenoy, P., Van der Merwe, J., 2007. Live data center migration across WANs: a robust cooperative context aware approach. In: Proceedings of the 2007 SIGCOMM Workshop on Internet Network Management, INM '07. ACM, pp. 262-267, https://doi.org/10.1145/1321753.1321762.
Riteau, P., Morin, C., Priol, T., 2011. Shrinker: improving live migration of virtual clusters over WANs with distributed data deduplication and content-based addressing. In: Proceedings of the 17th International Conference on Parallel Processing - Volume Part I, Euro-Par'11. Springer-Verlag, pp. 431-442, http://dl.acm.org/citation.cfm?id=2033345.2033391.
Rose, R., 2004. Survey of System Virtualization Techniques. Oregon State University, pp. 1-15, http://www.ece.cmu.edu/ece845/docs/rose-virtualization.pdf.
Rybina, K., Schill, A., 2016. Estimating energy consumption during live migration of virtual machines. In: 2016 IEEE International Black Sea Conference on Communications and Networking (BlackSeaCom), pp. 1-5, https://doi.org/10.1109/BlackSeaCom.2016.7901567.
Sahni, S., Varma, V., 2012. A hybrid approach to live migration of virtual machines. In: IEEE International Conference on Cloud Computing in Emerging Markets, CCEM '12. IEEE, pp. 1-5, https://doi.org/10.1109/CCEM.2012.6354587.
Sharma, S., Chawla, M., 2016. A three phase optimization method for precopy based VM live migration. SpringerPlus 5 (1), 1022, https://doi.org/10.1186/s40064-016-2642-2.
Shojafar, M., Canali, C., Lancellotti, R., Abawajy, J., 2016. Adaptive computing-plus-communication optimization framework for multimedia processing in cloud systems. IEEE Trans. Cloud Comput. PP (99), 1-1, https://doi.org/10.1109/TCC.2016.2617367.
Shojafar, M., Cordeschi, N., Baccarelli, E., 2016. Energy-efficient adaptive resource management for real-time vehicular cloud services. IEEE Trans. Cloud Comput. PP (99), 1-1, https://doi.org/10.1109/TCC.2016.2551747.
Strunk, A., 2013. A lightweight model for estimating energy cost of live migration of virtual machines. In: 2013 IEEE Sixth International Conference on Cloud Computing, pp. 510-517, https://doi.org/10.1109/CLOUD.2013.17.
Sun, G., Liao, D., Anand, V., Zhao, D., Yu, H., 2016. A new technique for efficient live migration of multiple virtual machines. Future Generat. Comput. Syst. 55 (Suppl. C), 74-86, https://doi.org/10.1016/j.future.2015.09.005.
Svärd, P., Hudzia, B., Tordsson, J., Elmroth, E., 2011. Evaluation of delta compression techniques for efficient live migration of large virtual machines. SIGPLAN Notices 46 (7), 111-120, https://doi.org/10.1145/2007477.1952698.
Travostino, F., Daspit, P., Gommans, L., Jog, C., de Laat, C., Mambretti, J., Monga, I., van Oudenaarde, B., Raghunath, S., Wang, P.Y., 2006. Seamless live migration of virtual machines over the MAN/WAN. Future Generat. Comput. Syst. 22 (8), 901-907, https://doi.org/10.1016/j.future.2006.03.007.
VMware, http://www.vmware.com. (Accessed 2 February 2018).
Wang, C., Hao, Z., Cui, L., Zhang, X., Yun, X., 2017. Introspection-based memory pruning for live VM migration. Int. J. Parallel Program. 45 (6), 1298-1309, https://doi.org/10.1007/s10766-016-0471-0.
Whitaker, A., Cox, R.S., Shaw, M., Gribble, S.D., 2004. Constructing services with interposable virtual hardware. In: Proceedings of the 1st Conference on Symposium on Networked Systems Design and Implementation - Volume 1, NSDI'04. USENIX Association, pp. 13-13, http://dl.acm.org/citation.cfm?id=1251175.1251188.
Wood, T., Shenoy, P., Venkataramani, A., Yousif, M., 2007. Black-box and gray-box strategies for virtual machine migration. In: Proceedings of the 4th USENIX Conference on Networked Systems Design & Implementation, NSDI'07. USENIX Association, pp. 17-17, http://dl.acm.org/citation.cfm?id=1973430.1973447.
Wu, T.-Y., Guizani, N., Huang, J.-S., 2017. Live migration improvements by related dirty memory prediction in cloud computing. J. Netw. Comput. Appl. 90 (Suppl. C), 83-89, https://doi.org/10.1016/j.jnca.2017.03.011.
Xu, F., Liu, F., Liu, L., Jin, H., Li, B., Li, B., 2014. iAware: making live migration of virtual machines interference-aware in the cloud. IEEE Trans. Comput. 63 (12), 3012-3025, https://doi.org/10.1109/TC.2013.185.
Zhang, Q., Cheng, L., Boutaba, R., 2010. Cloud computing: state-of-the-art and research challenges. J. Internet Serv. Appl. 1 (1), 7-18, https://doi.org/10.1007/s13174-010-0007-6.
Zhang, X., Huo, Z., Ma, J., Meng, D., 2010. Exploiting data deduplication to accelerate live virtual machine migration. In: Proceedings of the 2010 IEEE International Conference on Cluster Computing, CLUSTER '10. IEEE Computer Society, pp. 88-96, https://doi.org/10.1109/CLUSTER.2010.17.
Zhang, F., Fu, X., Yahyapour, R., 2017. CBase: a new paradigm for fast virtual machine migration across data centers. In: Proceedings of the 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid '17. IEEE Press, pp. 284-293, https://doi.org/10.1109/CCGRID.2017.26.

Mostafa Noshy is a teaching assistant at the Computer Engineering and Control Systems Department, Mansoura University, Egypt. He received his B.Sc. in 2014 with an overall grade of excellent with honors from Mansoura University. His research interests include cloud computing, virtualization and live virtual machine migration.

Abdelhameed Ibrahim was born in Mansoura, Egypt, in 1979. He attended the Faculty of Engineering, Mansoura University, where he received Bachelor and Master degrees in Engineering from the Electronics (Computer Engineering and Systems) Department in 2001 and 2005, respectively. He was with the Faculty of Engineering, Mansoura University, from 2001 through 2007. In April 2007, he joined the Graduate School of Advanced Integration Science, Faculty of Engineering, Chiba University, Japan, as a doctoral student. He received the Ph.D. degree in Engineering in 2011. His research interests are in the fields of computer vision and pattern recognition, with a special interest in material classification based on reflectance information.

Hesham Arafat Ali is a Professor in Computer Engineering and Systems and an Associate Professor in Information Systems and Computer Engineering. He was an assistant professor at the Faculty of Computer Science, Mansoura University, from 1997 to 1999. From January 2000 to September 2001, he joined the Department of Computer Science, University of Connecticut, as a visiting professor. From 2002 to 2004 he was vice dean for student affairs at the Faculty of Computer Science and Information, Mansoura University. He was awarded the Highly Commended Award from the Emerald Literati Club in 2002 for his research on network security. He is a founding member of the IEEE SMC Society Technical Committee on Enterprise Information Systems (EIS). He has many book chapters published by international presses and about 150 papers published in international conferences and journals. He has served as a reviewer for many high-quality journals, including the Journal of Engineering, Mansoura University. His interests are in the areas of network security, mobile agents, network management, search engines, pattern recognition, distributed databases, and performance analysis.
