Managing Data in Multimedia Conferencing
Introduction
The object model represents the multimedia data streams and their
operations. A multimedia stream corresponds to “the data specific for one or
more media”. For example, an object can contain a video data stream and an
audio data stream with a time dependency between them. An Ensemble differs
from a stream or object in that streams don’t contain specifications for related
data, and an object can’t coordinate disjoint streams. Streams or objects might
require data to be presented together in real-time synchronization, thus imposing
the requirement that all portions of a presentation moment be available within
some time span. In a multimedia group application, where any group member
can simultaneously be in multiple groups, Ensembles must observe all three
ordering properties.
Single Source ordering: If messages M1 and M2 originate at the same site, all
receivers within a specific multicast group will receive them in the same order.
Synchronization types
Inter-object synchronization: the time relation between multiple media
objects presented at a single site.
Structure
1. NE.dat: This contains the virtual names assigned to all nodes along with their
network names, a list of all nodes and edges in the conference graph, and a list of
Ensemble names with their corresponding stream names.
2. ERULES.dat: This contains the list of rules expressing the relationships
between the streams and Ensembles.
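As a purely hypothetical illustration (the node names, hostnames, and layout below are invented; the actual file format is not given in the source), NE.dat might contain entries such as:

```
# virtual name -> network name
N1  hostA.example.edu
N2  hostB.example.edu
# edges of the conference graph
N1 N2
# Ensemble -> stream names
E1: S1 S2 S3
```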
he/she initiates the “conference start” via the command interface. At
initialization, the E-systems communicate with each other. The current
initialization scheme simplifies the overall program structure, letting ECP
concentrate on data routing and coordination.
System Development
Representing Conferences
Let Sn represent the nth stream in Ensemble Ee. Data-stream coordination
requires that the ith element of stream S1 be presented simultaneously with the jth
element of stream S2. A ratio of the stream elements prevents the streams S1 and
S2 from being transmitted independently of each other.
[Figure 1. Conference graph: sources SO1–SO5 generate streams S1–S5, which
intermediate switching sites ISS1–ISS4 route to destinations D1, D2, and D3.
Streams S1–S3 form Ensemble E1, and streams S4–S5 form Ensemble E2; in
general, E = [Sm, Qn].]
Minimizing Delay
The figure drawn previously shows three data streams S1, S2, and S3
originating at separate locations SO1, SO2, and SO3. ECP must offer these
streams, which constitute Ensemble E1, all three synchronization types at each of
the final destinations D1, D2, and D3. Streams S4 and S5 compose Ensemble E2.
A time dependency exists between streams E1(1,2,3)(#,$,..), where #, $, etc.
represent the ratio of data packets in each stream that must be transmitted with
elements of the other streams to form the Ensemble. Depending on the type of
conference, we might or might not want to make the presentation at destination
D1 simultaneous with the presentation at destination D3 (delayed presentation
might be desirable in replay systems). The same dependencies exist for E2.
E-systems run a DIDM application using TCP/IP and don’t modify the
underlying operating system or network-processing facilities.
Flits
Flits are flow-control units whose header deterministically controls
the flow of all succeeding datagram parts. Flits strongly influenced the
development of the ECP routing scheme. Instead of flits, however, ECP uses a
path node vector (PV) and a distribution vector (DV), both computed at program
initialization. The PV identifies all destinations for each data stream. The DV
determines, for each node, all sites that are one graph edge distant and that must
receive the current data-stream segment (or data element). To the best of my
knowledge, the flit and DV have so far been implemented only in hardware used
to interconnect large-scale mesh- or cube-connected multicomputers.
To control the use and removal of the Ensemble and stream segments,
each stream is a C++ object. Each object contains a list of available segments in
the stream and arrays with the starting and ending list indexes of the segments in
the current Ensemble. During program execution, the number of segments
available for Ensemble transmission can exceed the number of segments
required for the current Ensemble because the various segments come from
different nodes. These index arrays, together with Required, Avail, and Avail_S,
provide the information needed to ensure transmission of a complete Ensemble.
In addition, because multimedia buffers can be used by any stream, ECP can’t
purge the buffer segments until the application program no longer needs the
segments.
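A minimal sketch of this per-stream bookkeeping, assuming names (Stream, segments_, released_) that are illustrative and not the actual ECP declarations:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical per-stream bookkeeping: a list of available segments plus
// a watermark for segments the application has released.
class Stream {
public:
    void add_segment(const std::string& data) { segments_.push_back(data); }

    // True when the segments needed for the next Ensemble, starting at
    // list index `start`, are all available.
    bool ensemble_ready(std::size_t start, std::size_t required) const {
        return segments_.size() >= start + required;
    }

    // Segments stay buffered until the application releases them, because
    // other streams may still reference the same multimedia buffers.
    void release_up_to(std::size_t index) {
        if (index <= segments_.size()) released_ = index;
    }

    std::size_t released() const { return released_; }

private:
    std::vector<std::string> segments_;  // available segments, in arrival order
    std::size_t released_ = 0;           // highest index freed by the application
};
```

The watermark mirrors the constraint above: buffers are only purged once the application signals it no longer needs them.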
Initialization
First, create the graph, G, and a LEDA (Library of Efficient Data types
and Algorithms) graph array, G, representing all nodes in the graph. List source
nodes first, destination nodes second, and intermediate nodes last.
Next, compute the adjacency vector (AV) for the current node. This is directly
available through the LEDA function G.adj_nodes.
For receiver nodes, use the LEDA GraphWin function to create a window
for each Ensemble to be played at this node. GraphWin is a platform-independent
mechanism that lets users draw windows and insert graphics and text. Next, for
each Ensemble originating at this node, store the information computed in steps
1, 2, and 3 (below) in the stream object for each node in the stream.
1. Given the stream’s starting and ending locations, determine the path to all the
stream’s destinations.
2. Create the path node vector. The path node vector is a string of bits, one bit
per graph node. A bit is set to 1 if the node is supposed to process (display or
transmit) this stream; otherwise it’s 0.
3. Compute the DV for each stream. The DV is a string of bits, one per graph
node, where a 1 represents a node in the path of the stream from the current
node to the next destination.
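Steps 2 and 3 both build bit strings with one position per graph node. A minimal sketch, assuming nodes are identified by their index in the source/destination/intermediate ordering above (the helper name is hypothetical):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Build a bit string with one character per graph node; positions listed
// in `set_bits` are marked '1' (node participates), all others '0'.
std::string make_bit_vector(std::size_t num_nodes,
                            const std::vector<std::size_t>& set_bits) {
    std::string v(num_nodes, '0');
    for (std::size_t i : set_bits)
        if (i < num_nodes) v[i] = '1';
    return v;
}
```

The same helper serves for the PV (all processing nodes of a stream) and the DV (nodes on the path to the next destination); only the index sets differ.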
Operation
Then, for each Ensemble passing through or originating at this node,
check whether all required Ensemble elements are available (either from the
network or from the multimedia application). Next, use the Ensemble’s stream
coordination rules to create a list of elements to be forwarded. Extract the PV
from the stream element. Compute the distribution vector by taking the logical
AND of the bits of the position vector with the bits of the adjacency vector for
the current node. Then:
1. Send the elements of this data stream to the nodes represented by 1’s in the
DV, and
2. Update the control arrays to indicate this Ensemble has been processed.
Given the directed graph G of the nodes in the figure drawn previously,
compute the adjacency matrix for each node at initialization. Apply a modified
Dijkstra subroutine to the graph to determine the routing for every stream and to
build the distribution vector that controls routing through the network. Thus, the
paths for stream S3 are
SO3, ISS3, D1
SO3, ISS3, D3
Once these paths are found, they are merged into a single path list with no
duplicate entries (SO3, D1, D3, ISS3). Using the same ordering of nodes, the
adjacency vector is computed as written below,
where the overbar indicates that the node isn’t in the path for this stream
in the Ensemble, and 001001010010 is the position vector for stream S3.
To determine the routing from the current node, SO3, I perform a bitwise
AND of the position vector for S3 (001001010010) with the adjacency vector
for SO3 (000000000010), giving the distribution vector (000000000010) and
identifying ISS3 as this element’s next destination.
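The distribution-vector computation is a plain bitwise AND over the two bit strings. A sketch reproducing the S3 example (the function name is illustrative):

```cpp
#include <algorithm>
#include <cstddef>
#include <string>

// Bitwise AND of two bit strings ('0'/'1' characters), as used to
// derive the distribution vector from the position and adjacency vectors.
std::string bit_and(const std::string& a, const std::string& b) {
    std::string out(std::min(a.size(), b.size()), '0');
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = (a[i] == '1' && b[i] == '1') ? '1' : '0';
    return out;
}
```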
ECP creates an object class, Gnodedata, that maintains all the information
about a node and includes multiple occurrences of the class Ensemble, one for
each group of data streams the node will process. ECP accesses a Gnodedata
object from the parametric data at a node n of the graph G. When the algorithm
determines that a stream is to be processed, the ECP at the node accesses all the
data pertaining to that stream from within Ensemble E. Gnodedata maintains
such relevant data as the mapping from virtual to real nodes, the mapping of
stream identifiers to and from Ensembles, and the adjacency vector. Ensemble E
stores identifiers for its streams, position vectors for each stream, and rules for
merging streams, such as those used for synchronizing Ensemble streams in the
algorithm.
RTP
The rule by which the size of an ADU is chosen states that an application
must be able to process each ADU separately and potentially out of order with
respect to other ADUs. So the loss of some ADUs, even if a retransmission is
triggered, does not stop the receiving application from processing other ADUs.
Therefore, to express data loss in terms meaningful to the application, each
ADU must contain a name that allows the receiver to understand the place of an
ADU in the sequence of ADUs produced by the sender. Hence, RTP data units
carry sequence numbers and timestamps, so that a receiver can determine the
time and sequence relation between ADUs. The ADU is also the main unit of
error recovery. Because each ADU is a meaningful data entity to the receiving
application, the application itself can decide how to cope with a lost data unit.
It should be noted that RTP itself does not make any QoS commitments,
does not guarantee reliable, timely, or in-order delivery, and does not enforce any
error treatment measures. However, extensions add reliability to RTP for
applications that cannot tolerate packet loss, for instance white-board
applications. The accompanying RTP Control Protocol (RTCP) facilitates
monitoring of the delivery QoS and conveys rudimentary information about the
participants of an RTP session. An RTP profile specifies the modifications of the
RTP message headers that are necessary for a specific class of applications.
An RTP data packet does not contain any length indication; an application
must either rely on the protocol used beneath RTP to provide framing
mechanisms, or, in case the underlying protocol provides a continuous octet
stream abstraction, define a method of encapsulating RTP packets in the octet
stream.
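For concreteness, the fixed 12-octet RTP header defined in RFC 3550 carries the sequence number, timestamp, and SSRC discussed above. A sketch of parsing it (struct and function names are illustrative, not from a real library):

```cpp
#include <cstddef>
#include <cstdint>

// Fields of the fixed RTP header (RFC 3550).
struct RtpHeader {
    uint8_t  version;       // must be 2
    bool     marker;
    uint8_t  payload_type;
    uint16_t sequence;      // for ordering and loss detection
    uint32_t timestamp;     // media sampling instant
    uint32_t ssrc;          // synchronisation source identifier
};

// Parse the first 12 octets of a framed RTP packet; the framing beneath
// RTP must supply the packet boundary, since RTP has no length field.
bool parse_rtp_header(const uint8_t* p, std::size_t len, RtpHeader& h) {
    if (len < 12) return false;
    h.version      = p[0] >> 6;
    h.marker       = (p[1] & 0x80) != 0;
    h.payload_type = p[1] & 0x7F;
    h.sequence     = static_cast<uint16_t>((p[2] << 8) | p[3]);  // network order
    h.timestamp    = (uint32_t(p[4]) << 24) | (uint32_t(p[5]) << 16) |
                     (uint32_t(p[6]) << 8)  |  uint32_t(p[7]);
    h.ssrc         = (uint32_t(p[8]) << 24) | (uint32_t(p[9]) << 16) |
                     (uint32_t(p[10]) << 8) |  uint32_t(p[11]);
    return h.version == 2;
}
```

The missing length indication is visible here: `len` must come from the underlying framing (e.g., a UDP datagram boundary), not from the RTP header itself.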
RTCP control packets are sent periodically (with small random time
variations to avoid traffic bursts) to the same (multicast) host address as the RTP
data packets. Their content facilitates:
• QoS monitoring and congestion control. The primary function of RTCP
messages is to provide feedback on the quality of the data distribution.
The conveyed information can be used for flow and congestion control, to
control adaptive encodings, and to detect network faults.
Applications that produce payload data generate RTCP sender reports.
These reports contain counters of sent packets and octets that allow other
session participants to estimate a sender's data rate.
Applications that have recently received packets issue RTCP receiver
reports, which contain the highest sequence number received, loss
estimations, jitter measures, and timing information needed to calculate
the round-trip delay between the sender and the receiver.
• Media synchronisation. Sender reports contain an NTP timestamp and an
RTP timestamp. The RTP timestamp describes the same instant as the
NTP timestamp but is measured in the same units as the timestamps issued
in the sender’s data packets. These two timestamps allow a receiver to
synchronise its playout clock rate with the sampling clock rate of the
sender. In addition, a receiver can use this RTP/NTP timestamp
correspondence to synchronise the playout offset delay of related streams.
• Member identification. RTP data packets and most RTCP packets carry
only an SSRC identifier but convey no other context data about a session
participant. Therefore, RTCP source description packets are sent by both
data senders and receivers. They contain additional information about the
session members, especially the obligatory canonical name (cname),
which identifies a participant throughout all related sessions independently
from the SSRC identifiers. The representation of the canonical name must
be understandable by both humans and machines. Other conveyed
information includes email addresses, telephone numbers, and the
application name, as well as optional application-specific information. An application
that decides to leave a session must transmit an RTCP bye packet to
indicate that it will not participate in the session anymore.
• Session size estimation and control traffic scaling. The total control
traffic generated by all participants in a session should not exceed a small
percentage of the data traffic, typically 5 percent. The control messages
from other session members enable a participant to estimate the session
size and to adapt its control packet rate, so as to remain within the limit of
the control traffic share. So, RTP scales up to large numbers of
participants.
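The 5 percent rule above can be sketched as a report-interval computation. This follows the spirit of RFC 3550, but the function name and the simplifications (no randomization, no sender/receiver bandwidth split) are assumptions:

```cpp
#include <algorithm>

// Per-member RTCP report interval: total control traffic is held to
// ~5% of the session bandwidth, so the interval grows linearly with
// the estimated session size. A minimum interval keeps very small
// sessions from reporting too frequently.
double rtcp_interval_seconds(int    members,
                             double avg_rtcp_packet_bytes,
                             double session_bandwidth_bps) {
    const double rtcp_share = 0.05;                         // 5% of session bw
    double rtcp_bw_bytes_per_s = rtcp_share * session_bandwidth_bps / 8.0;
    double interval = members * avg_rtcp_packet_bytes / rtcp_bw_bytes_per_s;
    return std::max(interval, 5.0);                         // minimum interval
}
```

Because the interval stretches as the estimated membership grows, the aggregate control traffic stays bounded, which is what lets RTP scale to large sessions.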
Current implementation
In February 2001, ECP began running on multiple systems. To reduce
extraneous delay factors, the program uses textual data only. This should also
simplify performance measurement. In May 2001, I had enough physical systems
to fully implement the network graph in Figure 1. In August 2001, I began
working with sufficiently large data streams (more than 200 records) to establish
some statistically meaningful information. Raw measurements show that
Ensemble segments arrive within 10 ms of each other from systems that are
within one hop differential from their destination, even though delays between
Ensembles can exceed this by a factor of 4. This work continues.
Configuration
Scalability
Conclusion
References
ABSTRACT
ACKNOWLEDGMENT
Mr. Sminesh (staff in charge) for his kind cooperation in presenting the
seminar.
CONTENTS
• INTRODUCTION
• COORDINATING DISTRIBUTED MULTIMEDIA DATA
o Synchronization types
• ARCHITECTURE AND IMPLEMENTATION
o Structure
o System Development
• REPRESENTING CONFERENCES
o Minimizing Delay
o Merging and routing Streams
o Flits
• MANAGING STREAM SEGMENTS
o The Ensemble Algorithm
• TRANSMISSION OF DATA AND PROTOCOL USED
o RTP
RTP Payload Transmission
Control and Feedback Information
o Current implementation
o Configuration
o Scalability
• CONCLUSION
• REFERENCES