Load Balancing For Multicast Traffic in SDN Using Real-Time Link Cost Modification
Department of Systems and Computer Engineering, Carleton University, Ottawa, Canada, K1S 5B6
Email: {alexander.craig,bnandy,ioannis}@carleton.ca
Huawei Canada, Ottawa, Canada, K2K 3J1
Email: [email protected]
I. INTRODUCTION
II. MOTIVATION
III. RELATED WORK
group source, and the path selected for the IGMP join messages is reversed to determine the path of multicast packets.
While our work makes use of IGMP, it is distinct from Multiflow in that IGMP packets are not used directly to calculate multicast routes, and IGMP is implemented in a centralized manner to reduce unnecessary forwarding of IGMP packets. In [6] the authors focus on improving the reliability of SDN multicast delivery through the implementation of fast tree switching. That paper describes a technique in which multiple backup multicast trees are calculated (using Dijkstra's algorithm) and cached in the network controller for each multicast group. While our work does not implement redundant tree calculation, it is similar to this related work in that it also uses Dijkstra's algorithm for the calculation of shortest path trees. These related works do not implement real-time bandwidth utilization tracking, nor do they implement routing based on current traffic state.
IV. PROBLEM DESCRIPTION
Each unidirectional link i in the network is assigned a cost C_i derived from its currently measured bandwidth utilization U_i and its maximum capacity M_i. Two link cost functions were evaluated. With linear link weights, the cost of a link grows in proportion to its utilization:

C_i = 1            : U_i = 0
C_i = U_i / M_i    : U_i > 0        (1)
With inverse proportional link weights, the cost of a link grows with the inverse of its remaining capacity:

C_i = 1                        : U_i = 0
C_i = 1 / (1 - U_i / M_i)      : 0 < U_i < M_i
C_i = ∞                        : U_i >= M_i        (2)
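As a concrete illustration of the two cost functions, the following Python sketch computes both weights from a utilization sample; the function names and the handling of the fully saturated case are illustrative assumptions rather than the exact controller code.

import math

def linear_cost(utilization_mbps, capacity_mbps):
    # Linear link weights (Eq. 1): an unloaded link behaves as a unit-cost hop,
    # otherwise cost is the utilization fraction of the link.
    if utilization_mbps == 0:
        return 1.0
    return float(utilization_mbps) / capacity_mbps

def inverse_proportional_cost(utilization_mbps, capacity_mbps):
    # Inverse proportional link weights (Eq. 2): cost grows with the inverse
    # of the remaining capacity, steering traffic away from loaded links.
    if utilization_mbps == 0:
        return 1.0
    if utilization_mbps < capacity_mbps:
        return 1.0 / (1.0 - float(utilization_mbps) / capacity_mbps)
    return math.inf  # a saturated link is effectively excluded from new trees

# Example: a 20 Mbit/s core link carrying a single 5 Mbit/s stream.
print(linear_cost(5, 20))                # 0.25
print(inverse_proportional_cost(5, 20))  # ~1.33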
V. CONTROLLER IMPLEMENTATION
VI. EXPERIMENTAL SETUP
Inputs to the tree calculation (controller state):
  full_topo_graph: List of all edge tuples corresponding to inter-switch links (learned from the Discovery module)
  adjacency_map: 2D map of port numbers, keyed by egress node and ingress node (learned from the Discovery module)
  desired_reception_state: List of tuples of the form <recv_node, recv_node_output_port> (derived from IGMPManager events)
  installed_flow_nodes: List of nodes on which rules are already installed for this sender/destination address
  tree_src_node: Node on which the sender for this sender/destination address is connected

Calculation steps:
  weighted_topo_graph <- Empty Set
  ...
  CalculatePathTreeDijkstras: Runs Dijkstra's algorithm on the supplied topology graph with the specified source node. Returns a map of lists where path_tree_map[dst_node] = set of edges from the source node to the destination node.
  ...
  installed_flow_nodes <- InstallOpenflowRules(edges_to_install, desired_reception_state, installed_flow_nodes)

Fig. 2: Pseudocode for calculation of a multicast tree for a single combination of sender and multicast destination address by the GroupFlow module. Nodes correspond to switches, which are identified by their DPID in integer format. Edges correspond to links, and each edge is stored as a tuple of <egress node DPID, ingress node DPID>.
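To make the flow of Fig. 2 concrete, the following sketch shows one way the weighted shortest-path tree calculation could be realized in Python. It assumes a cost has already been attached to every edge tuple (using Eq. 1 or Eq. 2); the function and variable names mirror the figure, not the actual GroupFlow source.

import heapq
from collections import defaultdict

def calculate_path_tree_dijkstras(weighted_topo_graph, tree_src_node):
    # weighted_topo_graph: iterable of (egress_dpid, ingress_dpid, cost) tuples.
    # Returns path_tree_map[dst_node] = list of (egress_dpid, ingress_dpid)
    # edges on the least-cost path from tree_src_node to dst_node.
    adjacency = defaultdict(list)
    for egress, ingress, cost in weighted_topo_graph:
        adjacency[egress].append((ingress, cost))

    distance = {tree_src_node: 0.0}
    previous_edge = {}  # node -> edge used to reach it
    heap = [(0.0, tree_src_node)]
    while heap:
        dist, node = heapq.heappop(heap)
        if dist > distance.get(node, float("inf")):
            continue  # stale heap entry
        for neighbour, cost in adjacency[node]:
            candidate = dist + cost
            if candidate < distance.get(neighbour, float("inf")):
                distance[neighbour] = candidate
                previous_edge[neighbour] = (node, neighbour)
                heapq.heappush(heap, (candidate, neighbour))

    path_tree_map = {}
    for dst_node in distance:
        edges, current = [], dst_node
        while current != tree_src_node:
            edge = previous_edge[current]
            edges.append(edge)
            current = edge[0]
        path_tree_map[dst_node] = list(reversed(edges))
    return path_tree_map

def edges_to_install(path_tree_map, desired_reception_state):
    # Union of the path edges required to reach every receiver node.
    edges = set()
    for recv_node, _output_port in desired_reception_state:
        edges.update(path_tree_map.get(recv_node, []))
    return edges

Installing the resulting edge set then reduces to emitting one OpenFlow rule per switch that appears in the selected edges, as in the InstallOpenflowRules step of Fig. 2.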
Mininet emulated hosts are used to generate and receive traffic in the network. From the perspective of the network controller, a Mininet emulated network is indistinguishable from a physical SDN network. All results presented here were produced by parsing the output logs of the network controller, and as such the same evaluation techniques could be applied to a physical SDN network.
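The emulation environment can be reproduced in outline with the standard Mininet Python API. The sketch below is a minimal example rather than the actual evaluation harness: it builds a small fixed four-switch ring instead of a BRITE-generated topology, and the controller address, port, and switch count are assumptions. Link rates match the configuration described later in this section (20 Mb/s core links, 1 Gb/s host links).

from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.link import TCLink
from mininet.topo import Topo

class CoreEdgeTopo(Topo):
    # Four core switches in a ring, with one host attached to each switch.
    def build(self):
        switches = [self.addSwitch('s%d' % i) for i in range(1, 5)]
        for a, b in [(0, 1), (1, 2), (2, 3), (3, 0)]:
            self.addLink(switches[a], switches[b], bw=20)    # 20 Mb/s core links
        for i, switch in enumerate(switches, start=1):
            host = self.addHost('h%d' % i)
            self.addLink(host, switch, bw=1000)              # 1 Gb/s edge links

if __name__ == '__main__':
    # Assumes a POX controller (e.g. running the GroupFlow modules) on localhost.
    net = Mininet(topo=CoreEdgeTopo(), link=TCLink,
                  controller=lambda name: RemoteController(name, ip='127.0.0.1', port=6633))
    net.start()
    net.pingAll()
    net.stop()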
Random fully connected topologies with an average node
degree of 4 were generated using the Waxman generator
provided by BRITE [15]. This topology configuration was
selected due to the work in [16] which indicates that random
2-connected topologies (i.e. topologies in which each pair of
nodes can be connected by at least 2 node-disjoint paths) are
most representative of real multicast performance in WAN
networks. Networks with a size of 20 and 40 nodes covering
a 1500 km × 4000 km area (roughly the size of the continental
United States) were generated and evaluated. Both topology
configurations presented similar trends in resulting metrics, and
as such traffic performance metrics will only be presented for
the 40 node network. All links in the network core (i.e. between switches) were uniformly configured with a bandwidth
of 20 Mb/s. To emulate edge networks, a single emulated host
was connected to each switch by a 1 Gb/s link, so that multicast
flows would only be bottlenecked by links in the network
core. Accordingly, the FlowTracker module was configured to track bandwidth utilization only on the inter-switch links.
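The FlowTracker implementation itself is not shown in this excerpt. As an illustration of how link utilization can be sampled from a POX controller, the sketch below periodically requests per-port counters from each switch and converts byte deltas into an approximate Mb/s figure; the polling period, component layout, and logging are assumptions rather than the actual FlowTracker code.

from pox.core import core
import pox.openflow.libopenflow_01 as of
from pox.lib.recoco import Timer

log = core.getLogger()
POLL_PERIOD = 2  # seconds between port-stats requests (assumed value)
_last_tx_bytes = {}  # (dpid, port_no) -> tx_bytes seen at the previous poll

def _request_port_stats():
    # Ask every connected switch for its per-port counters.
    for connection in core.openflow._connections.values():
        connection.send(of.ofp_stats_request(body=of.ofp_port_stats_request()))

def _handle_PortStatsReceived(event):
    # Convert the byte-count delta since the last poll into an approximate rate.
    for stat in event.stats:
        key = (event.dpid, stat.port_no)
        previous = _last_tx_bytes.get(key)
        _last_tx_bytes[key] = stat.tx_bytes
        if previous is not None:
            mbps = (stat.tx_bytes - previous) * 8.0 / (POLL_PERIOD * 1e6)
            log.debug("dpid=%s port=%s ~ %.2f Mb/s", event.dpid, stat.port_no, mbps)

def launch():
    core.openflow.addListenerByName("PortStatsReceived", _handle_PortStatsReceived)
    Timer(POLL_PERIOD, _request_port_stats, recurring=True)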
Figure: 40 Node Random Topo, Degree 4 Link Util Std.Dev. (20 Mbit/s, 720p Streaming). Standard deviation of link utilization versus Number of Active Groups (10 to 50), plotted for ShortestPath Routing, Linear Link Weights, and Inv. Proportional Link Weights.
Receiver arrival events for each multicast group were generated with exponentially distributed inter-arrival times (the λ parameter of the exponential distribution was set to 5/60). For each receiver arrival event the receiving host was chosen uniformly at random from all hosts in the network. For the purpose of calculating the mean occupancy of each group, this configuration can be considered as an M/M/∞ queuing system, as all receivers enter service (i.e. begin receiving the media stream) as soon as they are initialized, without queuing delay. The mean occupancy of an M/M/∞ queue is calculated as λ/μ, where λ is the parameter of the exponential distribution used to determine inter-arrival times and μ is the reciprocal of the mean time for which a receiver remains subscribed (60 seconds in this configuration). In all evaluation shown here, λ was set to 5/60, resulting in a mean inter-arrival time of 12 seconds and a mean occupancy of 5 receivers per multicast group. Accordingly, 5 active receivers were generated for each multicast group prior to the start of each trial run. This configuration allows for realistic churn of multicast receivers, while ensuring that the occupancy of each group averages 5 receivers in steady state. Statistic collection began after all senders and all initial receivers were initialized, and statistics were collected for a period of 3 minutes for each trial run. Trials were run with the number of active groups varied in 5 group increments from 10 groups to 50 groups.
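The steady-state occupancy can be checked with a short simulation of this arrival model. In the sketch below, the 60 second mean reception time is inferred from the stated mean occupancy of 5 and λ = 5/60 (occupancy = λ × mean holding time), rather than taken directly from the excerpt.

import random

# Receiver churn model: Poisson arrivals at rate lambda = 5/60 per second,
# exponentially distributed reception times.
ARRIVAL_RATE = 5.0 / 60.0   # receiver arrivals per second (mean inter-arrival 12 s)
MEAN_HOLD_S = 60.0          # assumed mean time a receiver stays subscribed

def mean_occupancy(horizon_s=200000.0, seed=7):
    random.seed(seed)
    events = []  # (time, +1 for a join, -1 for a leave)
    t = 0.0
    while t < horizon_s:
        t += random.expovariate(ARRIVAL_RATE)
        events.append((t, +1))
        events.append((t + random.expovariate(1.0 / MEAN_HOLD_S), -1))
    events.sort()
    occupancy, weighted_time, last_time = 0, 0.0, 0.0
    for time, delta in events:
        weighted_time += occupancy * (time - last_time)
        occupancy += delta
        last_time = time
    return weighted_time / last_time

print(mean_occupancy())  # converges to about 5 receivers per group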
VII. RESULTS
TABLE I. Average Number of Edges per Multicast Tree

Routing Configuration            Number of Edges
ShortestPath Routing             9.30
Linear Link Weights              10.85
Inv. Proportional Link Weights   11.13
While the previous metrics demonstrate that these techniques are effective at balancing traffic load in the network, it is also important to quantify the extent to which these techniques cause multicast traffic to deviate from the minimum hop count paths, and thus the extent to which total traffic volume is increased relative to shortest-path routing. Figure 5 presents the average link utilization among all links in the network. The 40 node network contains 80 bidirectional links, which the FlowTracker measures as 160 unidirectional links. Therefore, the total volume of traffic in the network can be obtained by scaling the average link utilization by the 160 measured unidirectional links.
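Assuming the rows of Table I correspond to ShortestPath Routing, Linear Link Weights, and Inv. Proportional Link Weights in that order, the relative increase in per-tree traffic volume follows directly from the average tree sizes:

# Relative increase in average multicast tree size (edges per tree, Table I)
# versus shortest-path routing; more edges means more delivered traffic volume.
shortest_path, linear, inv_proportional = 9.30, 10.85, 11.13
print("Linear link weights:       +%.1f%%" % ((linear / shortest_path - 1) * 100))
print("Inv. proportional weights: +%.1f%%" % ((inv_proportional / shortest_path - 1) * 100))
# -> roughly +16.7% and +19.7% more edges per tree than shortest-path routing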
Fig. 5: Average Link Util Across All Links (40 Nodes, 20 Mbit/s Links, 720p Streaming). Average link utilization versus Number of Active Groups for ShortestPath Routing, Linear Link Weights, and Inv. Proportional Link Weights.
VIII. CONCLUSION
REFERENCES
NoxRepo. [Online]. Available: http://www.noxrepo.org/pox/about-pox/ (Accessed: 22 February 2014)
GroupFlow. [Online]. Available: https://github.com/alexcraig/GroupFlow (Accessed: 18 March 2014)
OpenFlow Switch Specification v1.0.0. [Online]. Available: https://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-spec-v1.0.0.pdf (Accessed: 22 February 2014)
A. Tootoonchian, M. Ghobadi, and Y. Ganjali, "OpenTM: traffic matrix estimator for OpenFlow networks," in Proc. of the 11th International Conference, PAM 2010. Springer, 2010, pp. 201-210.
L. Jose, M. Yu, and J. Rexford, "Online measurement of large traffic aggregates on commodity switches," in Proc. of USENIX Hot-ICE, 2011.
C. Yu, C. Lumezanu, Y. Zhang, V. Singh, G. Jiang, and H. V. Madhyastha, "FlowSense: Monitoring network utilization with zero measurement cost," in Proc. of the 14th International Conference, PAM 2013. Springer, 2013, pp. 31-41.