
Question Paper Code : 57488

B.E/B.Tech. DEGREE EXAMINATION, MAY/JUNE 2016


Fifth Semester
Information Technology IT 6501 - GRAPHICS AND MULTIMEDIA
(Regulations 2013)
Time : Three Hours Maximum : 100 Marks
Answer ALL questions.
PART - A (10 x 2 = 20 Marks)
1.Define Aspect Ratio.
Aspect ratio is an image projection attribute that describes the proportional
relationship between the width of an image and its height. Many newer television
displays, such as those using HDTV technology, have a widescreen format with an
aspect ratio of 16:9.
2.What is composite transformation ?
A composite transformation is the result of applying two or more transformations
in sequence to a figure (called the preimage) to produce a new figure (called the image).
3.Differentiate orthographic and oblique parallel projection.
In an orthographic parallel projection, the projectors are perpendicular to the
view plane, so faces parallel to the view plane are shown in their true size and
shape. In an oblique parallel projection, the projectors meet the view plane at an
angle other than 90 degrees, which shows the front face truly while also revealing
the side and top faces.
4.How do we identify the principal Vanishing Point ?

The vanishing point theorem is the principal theorem in the science of perspective.
It says that the image in a picture plane π of a line L in space, not parallel to the
picture plane, is determined by its intersection with π and its vanishing point. The
vanishing point of lines parallel to one of the three principal axes (x, y or z) is
called a principal vanishing point.
5.What are the basic Objects of Multimedia ?
• Text.
• Image.
• Audio.
• Video.
• Animation.
6.Name a few Analog and Digital video broadcast standards.
• The major analog TV standards are NTSC, PAL and SECAM; all three are
interlaced video standards.
• Digital Video Broadcasting (DVB) is a set of standards that define digital
broadcasting using existing satellite, cable, and terrestrial infrastructures.
7.What is Quantization? Give a brief note.
Quantization is the process of converting a continuous range of values into a finite
range of discrete values. This is a function of analog-to-digital converters, which
create a series of digital values to represent the original analog signal.
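As a small illustration (a minimal Python sketch, not part of the original answer;
the function name and bit depth are assumed), uniform quantization of a value in
[0, 1) to 2^bits discrete levels can be written as:

    def quantize(x, bits=3):
        # Map a continuous value x in [0, 1) to one of 2**bits discrete levels.
        levels = 2 ** bits
        return min(int(x * levels), levels - 1)

    print(quantize(0.40))  # 3: 0.40 falls in the 4th of 8 levels (index 3)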
8.What is Palette Animation ?
Palette animation is a technique to simulate motion by rapidly changing the colors
of selected entries in a color palette. An application can carry out palette animation
by creating a logical palette that contains "reserved" entries and then using the
AnimatePalette function to change colors in those reserved entries.
9.Define Hypermedia.
A hypermedia system is a multimedia system in which related items of information
are connected and can be presented together.
10.What are the standard types of multimedia Object Servers ?
• Video servers
• Audio servers
• Database servers
PART-B (5 x 16 = 80 Marks)
11. (a)Explain Midpoint Ellipse algorithm in detail with an example.
OR
(b)(i)Explain Cohen-Sutherland line clipping with example.
2D clipping:
The removal (cutting off) of the parts of an object or region that lie outside
the window in window coordinates is called 'clipping' or 'window clipping'.
Types of clipping:
• Point clipping
• Line clipping
• Area clipping
• Curve clipping
• Text clipping
• Polygon clipping
Line clipping:
It is the process of removing lines or portion of lines outside an
area of interest.
Types of line-clipping:
• Cohen Sutherland line clipping
• Liang - Barsky line clipping
Cohen Sutherland line clipping algorithm:
• Assign each endpoint a 4-bit region code. Find the logical AND of the two
endpoint codes; if the result is not 0000, the line lies completely outside
the window and is rejected.
• If the resultant is 0000, the particular line is either partially visible or
completely visible.
• If the line is partially visible, find the intersection point with the window
boundary. The equation for the y-intersection point is

Y = Y1 + m(X-X1)
Where,
m - slope of the line, m = (Y2-Y1) / (X2-X1)
To compute the x-intersection point,
X = X1 + (1/m)(Y-Y1).
• Based on the intersection point, the portion of the line outside the window
is clipped off.

Area code: T – top , B – bottom , R – right , L – left .


The nine regions and their TBRL codes are:

1001 | 1000 | 1010
-----+------+-----
0001 | 0000 | 0010
-----+------+-----
0101 | 0100 | 0110

(the window is the central 0000 region)
Ex: Consider the line whose endpoint p1 is (2,5) and whose other endpoint p2 is
(8,12). Find the endpoints of the visible portion of the line using the Cohen
Sutherland line clipping algorithm, clipping against a window whose minimum
point is (5,3) and maximum point is (10,10).

Solution:
Given,
Minimum window Wmin = (5,3)
Maximum window Wmax = (10,10)
One endpoint p1 = (2,5)
Another endpoint p2= (8,12)

Diagram:

Step i):
Region code for p1 = 0001
Region code for p2 =1000
Logical AND of p1 and p2 is 0000
Since logical AND resultant is 0000 , the line is partially or
completely visible.
Step ii):
The given line crosses the vertical (left) boundary of the window, so find the
vertical intersection point by using the equation
Y = Y1 + m(X-X1).
m = (y2-y1)/(x2-x1)
here x1=2 , x2=8 , y1=5 , y2=12
m = (12-5) / (8-2)
m=7/6
The left boundary X = 5 of the window (from Wmin = (5,3)) has to be considered,
because the line enters the window across that boundary.
y1 = 5 , m = 7/6 , X = 5 , x1 = 2.
Therefore,
Y = 5 + (7/6)(5-2)
  = 5 + (7/6)(3)
  = 5 + 7/2
  = 17/2
  = 8.5
p1' = (5, 8.5)

Now consider the line p1’ to p2. p2(8,12) , p1’(5,8.5)


Step i) :
Region code for p2 is 1000
Region code for p1’ is 0001
Logical AND of p2 and p1’ is 0000.
So , the line p2 to p1’ is completely or partially visible .
Step ii):
Now the line crosses the horizontal (top) boundary of the window, so find the
horizontal intersection point (x-intersection).

X = X1 + (1/m)(Y-Y1)
m = (y2-y1) / (x2-x1)
m = (8.5 - 12) / (5 - 8)
  = (-3.5) / (-3)
m = 1.166 , so 1/m = 0.857
Therefore,
X = 8 + 0.857(10-12)
  = 8 + 0.857(-2)
  = 8 - 1.714
  = 6.28
p2' = (6.28 , 10)
Diagram:
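The steps above can be summarized in a short Python sketch (an illustrative
implementation, assumed rather than taken from the syllabus text; it uses the
window Wmin = (5,3), Wmax = (10,10) from the example, with TBRL bit codes):

    XMIN, YMIN, XMAX, YMAX = 5, 3, 10, 10        # window from the example
    TOP, BOTTOM, RIGHT, LEFT = 8, 4, 2, 1        # TBRL = 1000, 0100, 0010, 0001

    def region_code(x, y):
        # Compute the 4-bit TBRL region code of a point.
        code = 0
        if y > YMAX: code |= TOP
        elif y < YMIN: code |= BOTTOM
        if x > XMAX: code |= RIGHT
        elif x < XMIN: code |= LEFT
        return code

    def clip(x1, y1, x2, y2):
        # Return the visible endpoints, or None if the line is invisible.
        c1, c2 = region_code(x1, y1), region_code(x2, y2)
        while True:
            if c1 == 0 and c2 == 0:   # both codes 0000: completely visible
                return (x1, y1), (x2, y2)
            if c1 & c2:               # logical AND not 0000: completely outside
                return None
            c = c1 or c2              # pick an endpoint that lies outside
            if c & TOP:
                x, y = x1 + (x2 - x1) * (YMAX - y1) / (y2 - y1), YMAX
            elif c & BOTTOM:
                x, y = x1 + (x2 - x1) * (YMIN - y1) / (y2 - y1), YMIN
            elif c & RIGHT:
                x, y = XMAX, y1 + (y2 - y1) * (XMAX - x1) / (x2 - x1)
            else:
                x, y = XMIN, y1 + (y2 - y1) * (XMIN - x1) / (x2 - x1)
            if c == c1:
                x1, y1, c1 = x, y, region_code(x, y)
            else:
                x2, y2, c2 = x, y, region_code(x, y)

    print(clip(2, 5, 8, 12))   # ((5, 8.5), (6.2857..., 10)), as in the example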

(ii) Discuss in brief: Antialiasing techniques.


Aliasing:
On low-resolution screens, a displayed line appears as a stair-step pattern. This
effect is known as aliasing.
Antialiasing :
The aliasing effect can be reduced by adjusting intensities of the pixels
along the line. The process of adjusting intensities of the pixels along the line to
minimize the effect of aliasing is called ‘antialiasing’.
Antialiasing methods:
• Supersampling (post filtering)
  - Considering zero line width
  - Considering finite line width
  - With a pixel weighting mask
• Area sampling (pre filtering)
  - Unweighted
  - Weighted
• Filtering techniques
• Pixel phasing

• Supersampling (considering zero line width):

It can be performed in several ways:
• For the gray scale display of a line segment, we can divide
each pixel into a number of subpixels and count the number
of subpixels that are along the line path. The intensity level
for each pixel is then set to a value that is proportional to
this subpixel count.
• More intensity levels can be used to antialias the line. To do
this, we need to increase the number of sampling positions
across each pixel.
• Ex: sixteen subpixels can give four intensity levels above
zero, and twenty-five subpixels can give us five levels, and
so on.
• Supersampling considering finite line width:
• Here the line is treated as having a finite width,
approximately equal to the pixel size, instead of zero width.
• Supersampling can then be performed by setting each pixel
intensity proportional to the number of subpixels inside the
polygon representing the line area.
• A subpixel can be considered to be inside the line if its
lower left corner is inside the polygon boundaries.
• Supersampling with a pixel weighting mask:
• The supersampling method is often implemented by giving
more weight to subpixels near the center of the pixel area,
since these subpixels are more important in determining the
overall intensity of a pixel.
• The center subpixel has four times the weight of the corner
subpixels and twice that of the remaining subpixels:

1 2 1
2 4 2
1 2 1

• An array of values specifying the relative weights of
subpixels is referred to as a 'mask' of subpixel weights.
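As an illustration (a minimal Python sketch, assumed; in practice the subpixel
coverage would come from the line-area test described above), a pixel's intensity
under this 3x3 mask is the weighted fraction of covered subpixels:

    MASK = [[1, 2, 1],
            [2, 4, 2],
            [1, 2, 1]]                 # weights sum to 16

    def pixel_intensity(covered):
        # covered[i][j] is True if subpixel (i, j) lies inside the line area.
        total = sum(sum(row) for row in MASK)
        hit = sum(MASK[i][j] for i in range(3) for j in range(3)
                  if covered[i][j])
        return hit / total             # intensity in the range 0.0 to 1.0

    covered = [[False, False, False],
               [False, True,  False],
               [True,  True,  True ]]
    print(pixel_intensity(covered))    # (4 + 1 + 2 + 1) / 16 = 0.5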
• Area sampling (unweighted):
• In this form of antialiasing, instead of picking only the
closest pixel, both pixels straddling the line are highlighted.
• The intensity of a pixel is proportional to the amount of
line area occupied by the pixel.
• Area sampling (weighted):
• Here equal areas contribute unequally; i.e., a smaller
area closer to the pixel center contributes greater intensity
than an equal area at a greater distance.
• Hence the intensity depends on both the line area
occupied and the distance of that area from the pixel's
center.
• Filtering technique:
• It is a more accurate method for antialiasing lines.
• It is similar to applying a weighted pixel mask, but a
continuous weighting surface covering the pixel is
considered.
• Pixel phasing:
• Another way to antialias raster objects is to shift the
display location of pixel areas. This technique is called
'pixel phasing'.
• It is applied by 'micropositioning' the electron beam in
relation to the object geometry.
12. (a)(i) What are the properties of Bezier Curves ?
The properties of Bezier curves are:
• The basis functions are real.
• A Bezier curve always passes through the first and last
control points, i.e., the curve has the same end points as the
guiding polygon.
• The degree of the polynomial defining the curve segment
is one less than the number of defining polygon points.
Therefore, with 4 control points, the degree of the
polynomial is three, i.e., a cubic polynomial.
• The curve generally follows the shape of the defining
polygon.
• The direction of the tangent vector at the end points is the
same as that of the vectors determined by the first and last
segments.
• The curve lies entirely within the convex hull formed
by the control points.
• The convex hull property for a Bezier curve ensures
that the polynomial smoothly follows the control
points.
• The curve exhibits the variation-diminishing property. This
means that the curve does not oscillate about any straight
line more often than the defining polygon does.
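A minimal Python sketch (assumed, for illustration only) can verify two of these
properties for a cubic Bezier curve: the curve passes through the first and last
control points, and every computed point lies within the convex hull of the four
control points:

    def bezier_cubic(p0, p1, p2, p3, t):
        # Evaluate the cubic Bezier curve using the Bernstein basis functions.
        b = [(1 - t)**3, 3*t*(1 - t)**2, 3*t**2*(1 - t), t**3]
        return tuple(b[0]*c0 + b[1]*c1 + b[2]*c2 + b[3]*c3
                     for c0, c1, c2, c3 in zip(p0, p1, p2, p3))

    P = [(0, 0), (1, 3), (3, 3), (4, 0)]   # the guiding polygon (control points)
    print(bezier_cubic(*P, 0.0))   # (0.0, 0.0): the first control point
    print(bezier_cubic(*P, 1.0))   # (4.0, 0.0): the last control point
    print(bezier_cubic(*P, 0.5))   # (2.0, 2.25): inside the convex hull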
(ii)Explain about Perspective Projection method.
• It produces ‘realistic views’ but does not preserve realistic
proportions.
• The lines of projection are not parallel.
• Instead they all converge at a single point called the ‘center
of projection’ or ‘projection reference point’.
• The object positions are transformed to the view plane along
these converged projection lines and the projected view of an
object is determined by calculating the intersection of the
converged projection lines with the view plane.

• Parametric equations for the projection lines are:

x' = x - x·u
y' = y - y·u                                        ... (1)
z' = z - (z - zprp)·u ,  where u varies from 0 to 1

• When u = 0:
x' = x ; y' = y ; z' = z   (the point itself)
• When u = 1:
x' = x - x = 0
y' = y - y = 0
z' = z - (z - zprp) = z - z + zprp = zprp   (the projection reference point)

• Consider z' = z - (z - zprp)u and solve for u:
u = (z' - z) / -(z - zprp) = (z' - z) / (zprp - z)
On the view plane, z' = zvp. Therefore
u = (zvp - z) / (zprp - z)

• Substituting u in equation (1), we get:
x' = x (zprp - zvp) / (zprp - z)
y' = y (zprp - zvp) / (zprp - z)
z' = zvp

• Let dp = zprp - zvp. Therefore
x' = x dp / (zprp - z)
y' = y dp / (zprp - z)
z' = zvp

• Let h = (zprp - z) / dp. The homogeneous coordinates are then:
xh = xp · h = [x dp / (zprp - z)] · [(zprp - z) / dp] = x
yh = yp · h = [y dp / (zprp - z)] · [(zprp - z) / dp] = y
zh = zvp · h = zvp (zprp - z) / dp
   = (zvp · zprp) / dp - (z · zvp) / dp ,
where (zvp · zprp) / dp is constant.

• In matrix form:

[xh]   [ 1   0      0             0          ] [x]
[yh] = [ 0   1      0             0          ] [y]
[zh]   [ 0   0   -zvp/dp    (zvp·zprp)/dp    ] [z]
[h ]   [ 0   0    -1/dp        zprp/dp       ] [1]
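The transformation can be illustrated with a minimal Python sketch (assumed,
with example values zprp = 10 and zvp = 0 chosen purely for illustration):

    zprp, zvp = 10.0, 0.0
    dp = zprp - zvp                        # dp = zprp - zvp

    def perspective_project(x, y, z):
        # Apply the homogeneous matrix above, then divide by h.
        h = (zprp - z) / dp                # homogeneous coordinate
        xh, yh = x, y                      # from rows 1 and 2 of the matrix
        zh = (-zvp / dp) * z + (zvp * zprp) / dp
        return xh / h, yh / h, zh / h      # (xp, yp, zp) with zp = zvp

    print(perspective_project(2.0, 2.0, 0.0))    # (2.0, 2.0, 0.0): on the view plane
    print(perspective_project(2.0, 2.0, -10.0))  # (1.0, 1.0, 0.0): farther away, foreshortened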

OR
(b)(i)Explain the Depth-Buffer method for visible surface detection.
(ii)Write notes on YIQ and HSV color Models.
YIQ model:
The YIQ model is the color model used in NTSC television broadcasting. Y is the
luminance (brightness) component and is the only component used by black-and-white
television receivers; I and Q carry the chrominance (hue and saturation)
information. Luminance is obtained from RGB as Y = 0.299R + 0.587G + 0.114B.
HSV model:
The HSV model specifies a color by three intuitive parameters: Hue (the pure
color, expressed as an angle from 0° to 360°), Saturation (the purity of the color,
from 0 to 1) and Value (the brightness, from 0 to 1). Geometrically, the model is
a hexcone derived from the RGB color cube, with V along the axis of the cone.
13. (a) (i) Write about the use of Full-motion Video in Multimedia
applications.
Full-Motion Video Management
Use of full-motion video for information repositories and memos makes them more
informative: more information can be conveyed and explained in a short full-motion
video clip than in a long text document, because a picture is worth a
thousand words.
Full Motion video Authoring System
An authoring system is an important component of a multimedia messaging
system. A good authoring system must provide a number of tools for the creation
and editing of multimedia objects. The subset of tools that are necessary are listed
below:
1. A video capture program - to allow fast and simple capture of digital video from
analog sources such as a video camera or a video tape.
2. Compression and decompression Interfaces for compressing the captured video
as it is being captured.
3. A video editor with the ability to decompress, combine, edit, and compress
digital video clips.
4. Video indexing and annotating software for marking sections of a videoclip and
recording annotations.
Identifying and indexing video clips for storage.
Full-Motion Video Playback Systems
The playback system allows the recipient to detach the embedded video reference
object, interpret its contents, retrieve the actual video clip from a specialized
video server, and launch the playback application. A number of factors are
involved in playing back the video correctly. They are:
1.How the compression format used for the storage of the video clip relates to the
available hardware and software facilities for decompression.
2.Resolution of the screen and the system facilities available for managing display
windows. The display resolution may be higher or lower than the resolution of the
source of the video clip.
3.The CPU processing power and the expected level of degradation, as well as
managing the degraded output on the fly.
4.Ability to determine the hardware and software facilities of the recipient's
system, and to adjust playback parameters to provide the best resolution and
performance on playback.
The three main technologies for playing full motion video are Microsoft's Video
for Windows, Apple's QuickTime, and Intel's Indeo.
Video for Windows (VFW): It is the most common environment for multimedia
messaging. VFW provides capture, edit, and playback tools for full-motion video.
The tools provided by VFW are: the VidCap tool, designed for fast digital video
capture; the VidEdit tool, designed for decompressing, editing, and compressing
full-motion digital video; and the VFW playback tool. The VFW architecture uses
OLE. With the development of DDE and OLE, Microsoft introduced in Windows
the capability to link or embed multimedia objects in a standardized manner, so a
variety of Windows-based applications can interact with them. We can add full-motion
video to any Windows-based application with the help of VFW. The VFW
playback tool is designed to use a number of codecs (software encoders/decoders)
for decompressing and playing video files. The default is for AVI files.
Apple's QuickTime
Apple's QuickTime is an integrated system for playing back video files. The
QuickTime product supports four compression methodologies.
Intel's Indeo
Indeo is a digital video recording format. It is a software technology that reduces
the size of uncompressed video files through successive compression
methodologies, including YUV subsampling, vector quantization, Huffman
run-length encoding, and variable content encoding. Indeo technology is designed
to be scalable for playing back video; it determines the hardware available and
optimizes playback for that hardware by controlling the frame rate. The
compressed file must be decompressed for playback; Indeo technology
decompresses the video file dynamically in real time. A number of operating
systems provide Indeo technology as a standard feature, and it ships with other
software products (e.g., VFW).
(ii) Write about Multimedia System Architecture.
Multimedia encompasses a large variety of technologies and integration of
multiple architectures interacting in real time. All of these multimedia capabilities
must integrate with the standard user interfaces such as Microsoft Windows.
The following figure describes the architecture of a multimedia workstation
environment.
In this diagram, the right side shows the new architectural entities required for
supporting multimedia applications. For each special device such as a scanner,
video camera, VCR or sound equipment, a software device driver is needed to
provide the interface from an application to the device. The GUI requires control
extensions to support applications such as full motion video.
High Resolution Graphics Display:
The various graphics standards, such as MDA, CGA and XGA, have demonstrated
the increasing demand for higher resolutions for GUIs.
Combined graphics and imaging applications require functionality at three levels.
They are provided by three classes of single-monitor architecture.
• VGA mixing: In VGA mixing, the image acquisition memory serves as the
display source memory, thereby fixing its position and size on screen.
• VGA mixing with scaling: Use of scaler ICs allows sizing and positioning
of images in predefined windows. Resizing the window causes the image to
be retrieved again.
• Dual-buffered VGA/Mixing/Scaling: Double buffer schemes maintain the
original images in a decompression buffer and the resized image in a display
buffer.
The IMA Architectural Framework:
The Interactive Multimedia Association has a task group to define the architectural
framework for multimedia to provide interoperability. The task group has
concentrated on the desktops and the servers. The desktop focus is to define the
interchange formats. This format allows multimedia objects to be displayed on any
workstation.
The architectural approach taken by IMA is based on defining interfaces to a
multimedia interface bus. This bus would be the interface between systems and
multimedia sources. It provides streaming I/O services, including filters and
translators. This describes the generalized architectural approach.
Network Architecture for Multimedia Systems:
Multimedia systems need special networks, because large volumes of images and
video messages are being transmitted.
Asynchronous Transfer Mode (ATM) technology simplifies transfers across LANs
and WANs.
Task-based multilevel networking:
Higher classes of service require more expensive components in the workstations
as well as in the servers supporting the workstation applications.
Rather than impose this cost on all workstations, an alternative approach is to
adjust the class of service to the specific requirements of the user. This approach
can also adjust the class of service according to the type of data being handled at
a given time.
We call this approach task-based multilevel networking.
High speed server to server Links:
Duplication: It is the process of duplicating an object that the user can manipulate.
There is no requirement for the duplicated object to remain synchronized with the
source (or master) object.
Replication: Replication is defined as the process of maintaining two or more
copies of the same object in a network that periodically re-synchronize, to provide
the user faster and more reliable access to the data. Replication is a complex
process.
Networking Standards:
The two well-known networking standards are
1. Ethernet
2. token ring.
ATM and FDDI are the two technologies which are going to be discussed in detail.
ATM:
· ATM is an acronym for Asynchronous Transfer Mode. Its topology was
originally designed for broadband applications in public networks.
· ATM is a method of multiplexing and relaying (cell-switching) 53-byte cells (48
bytes of user information and 5 bytes of header information).
· It has been increasingly used for transferring real-time multimedia data in local
networks at speeds higher than 100 Mbits/sec. ANSI has adopted ATM as the cell
switching standard.
· Cell switching: It is a form of fast packet switching based on the use of cells.
Cells: Short, fixed length packets are called cells.
· The ANSI standard for FDDI allows large-distance networking. FDDI can be
used as a high-performance backbone network to complement and extend current
LANs.
· ATM provides a high-capacity, low-latency switching fabric for data. It is
independent of protocol and distances. ATM effectively manages a mix of data
types, including text data, voice, images and full motion video. ATM was proposed
as a means of transmitting multimedia applications over asynchronous networks.
FDDI:
· FDDI is an acronym for Fiber Distributed Data Interface. An FDDI network is
an excellent candidate to act as the hub in a network configuration, or as a
backbone that interconnects different types of LANs.
· FDDI presents a potential for standardization of high speed networks.
· The ANSI (American National Standards Institute) standard for FDDI allows for
single-mode fiber supporting up to 40 km between stations.
· It extends current LAN speeds to 100 Mbits/sec and supports large-distance
networking.
OR
(b)(i)Write about the data types for Multimedia Systems.
The basic data types of objects used in multimedia include text, image, audio,
holograms and full motion video.
TEXT It is the simplest of data types and requires the least amount of storage.
Text is the base element of a relational database. It is also the basic building block
of a document. The major attributes of text include paragraph styling, character
styling, font families and sizes, and relative location in a document.
HYPERTEXT It is an application of indexing text to provide a rapid search of
specific text strings in one or more documents. It is an integral component of
hypermedia documents. A hypermedia document is the basic complex object of
which text is a sub object. Sub-objects include images, sound and full motion
video. A hypermedia document always has text and has one or more other types of
sub-objects.
IMAGES Image object is an object that is represented in graphics or encoded
form. Image object is a sub object of the hypermedia document object. In this
object, there is no direct relationship between successive representations in time.
The image object includes all data types that are not coded text. They do not have
a temporal property associated with them. Data types such as document images,
facsimile systems, fractals, bitmaps, metafiles, and still pictures or still video
frames are grouped together.
Non-Visible: These images are not stored as images, but they are displayed as
images. Examples: pressure gauges and temperature gauges.
Abstract: Abstract images are computer-generated images based on some
arithmetic calculations. They are really not images that ever existed as real-world
objects. Example of these images is fractals.
AUDIO AND VOICE Stored-Audio and Video objects contain compressed audio
information. This can consist of music, speech, telephone conversation and voice
commands. An Audio object needs to store information about the sound clip.
Information here means length of the sound clip, its compression algorithm,
playback characteristics, and any annotations associated with the original clip.
FULL MOTION AND LIVE VIDEO Full motion video refers to pre-stored
video clips. Live video is live and must be processed while it is being captured by
the camera. From a storage perspective, we should have information about the
coding algorithm used for compression, since decoding is needed as well. From a
processing perspective, video should be presented to the user smoothly and there
should not be any unexpected breaks. Hence, the video object and its associated
audio object must be transferred over the network to the decompression unit, and
then played at the fixed rate specified for it. For successful playback of
compressed video, a number of technologies are needed: database storage, network
media and protocols, decompression engines and display engines.
(ii)Explain different file formats used in Multimedia
Rich Text Format
This format extends the range of information from one word processor application
or DTP system to another.
The key format information carried across in RTF documents are given below:
Character Set: It determines the characters supported in a particular
implementation.
Font Table: This lists all fonts used. Then, they are mapped to the fonts available
in receiving application for displaying text.
Color Table: It lists the colors used in the document. The color table is then
mapped, for display by the receiving application, to the nearest set of colors
available to that application.
Document Formatting: Document margins and paragraph indents are specified
here.
Section Formatting: Section breaks are specified to define separation of groups of
paragraphs.
Paragraph Formatting: It specifies style sheets. It specifies control characters for
paragraph justification, tab positions, left, right and first indents relative
to document margins, and the spacing between paragraphs.
General Formatting: It includes footnotes, annotations, bookmarks and pictures.
Character Formatting: It includes bold, italic, underline (continuous, dotted or
word), strike through, shadow text, outline text, and hidden text.
Special Characters: It includes hyphens, spaces, backslashes, underscores and so on.
TIFF File Format
TIFF is an industry-standard file format designed to represent raster image data
generated by scanners, frame grabbers, and paint/ photo retouching applications.
TIFF Version 6.0 offers the following formats:
(i) Grayscale, palette color, RGB full-color images, and black and white.
(ii) Run-length encoding, uncompressed images, and modified Huffman data
compression schemes.
The additional formats are: tiled images, additional compression schemes, and
images using the CMYK and YCbCr color models.
TIFF Structure
TIFF files consist of a header. The header consists of a byte-ordering flag, the
TIFF file format version number, and a pointer to a table. The pointer points to
the image file directory. This directory contains a table of entries of various tags
and their information. TIFF file format header:
The next figure shows the IFD (Image File Directory) and its content. The IFD is
a variable-length table containing directory entries. The length of the table depends
on the number of directory entries in the table. The first two bytes contain the total
number of entries in the table, followed by the directory entries. Each directory
entry consists of twelve bytes. The last item in the IFD is a four-byte pointer that
points to the next IFD.
The byte content of each directory entry is as follows:
· The first two bytes contain the tag number (Tag ID).
· The second two bytes represent the type of the data.
· The next four bytes contain the length (count) for the data type.
· The final four bytes contain the data or a pointer to it.
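A minimal Python sketch (assumed, not part of the original answer) of unpacking
one such 12-byte directory entry:

    import struct

    def parse_ifd_entry(entry, little_endian=True):
        # 2-byte tag, 2-byte data type, 4-byte count, 4 bytes of data or a pointer.
        order = "<" if little_endian else ">"   # per the header's byte-ordering flag
        tag, dtype, count, value = struct.unpack(order + "HHII", entry)
        return {"tag": tag, "type": dtype, "count": count, "value_or_offset": value}

    # Example entry: tag 256 (ImageWidth), type 3 (SHORT), count 1, value 640.
    entry = struct.pack("<HHII", 256, 3, 1, 640)
    print(parse_ifd_entry(entry))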
TIFF Tags
The first two bytes of each directory entry contain a field called the Tag ID.
Tag IDs are grouped into several categories: Basic, Informational, Facsimile,
Document Storage and Retrieval.
TIFF Classes (Version 5.0): It has five classes:
1. Class B for binary images
2. Class F for Fax
3. Class G for gray-scale images
4. Class P for palette color images
5. Class R for RGB full-color images.
MIDI File Format
The MIDI file format follows the music recording metaphor: it provides the means
of storing separate tracks of music for each instrument so that they can be read
and synchronized when they are played. The MIDI file format also consists of
chunks (i.e., blocks) of data. There are two types of chunks:
• header chunks
• track chunks.
Header Chunk
It is made up of 14 bytes.
The first four-character string is the identifier string, "MThd".
The second four bytes contain the data size for the header chunk. It is set to a
fixed value of six bytes.
The last six bytes contain data for header chunk.
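A minimal Python sketch (assumed) of parsing this 14-byte header chunk; the six
data bytes hold the format, the number of tracks, and the time division:

    import struct

    def parse_midi_header(data):
        # 4-byte "MThd" identifier, 4-byte size (always 6), then 6 data bytes.
        ident, size = struct.unpack(">4sI", data[:8])    # MIDI files are big-endian
        assert ident == b"MThd" and size == 6
        fmt, ntracks, division = struct.unpack(">HHH", data[8:14])
        return fmt, ntracks, division

    # Example header: format 1, 2 tracks, 480 ticks per quarter note.
    header = b"MThd" + struct.pack(">IHHH", 6, 1, 2, 480)
    print(parse_midi_header(header))    # (1, 2, 480)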
Track chunk
The Track chunk is organized as follows:
1.The first 4-character string is the identifier.
2.The second 4 bytes contain track length.
MIDI Communication Protocol
This protocol uses messages of two or more bytes. The number of bytes depends
on the type of message. There are two types of messages:
(i) Channel messages and (ii) System messages.
Channel Messages
A channel message can have up to three bytes in a message. The first byte is called
a status byte, and other two bytes are called data bytes. The channel number, which
addresses one of the 16 channels, is encoded by the lower nibble of the status byte.
Each MIDI voice has a channel number; and messages are sent to the channel
whose channel number matches the channel number encoded in the lower nibble of
the status byte. There are two types of channel messages: voice messages and
mode messages.
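A minimal sketch (Python, assumed) of extracting the channel number from a
status byte as described above:

    def decode_status(status):
        # Upper nibble: message type (e.g., 0x9 = Note On); lower nibble: channel.
        return status >> 4, status & 0x0F

    print(decode_status(0x93))   # (9, 3): a Note On message for channel 3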
Voice messages
Voice messages are used to control the voice of the instrument (or device); that is,
to switch notes on or off, to send key pressure messages indicating that a key is
depressed, and to send control messages to control effects like vibrato, sustain,
and tremolo. Pitch wheel messages are used to change the pitch of all notes.
Mode messages
Mode messages are used for assigning voice relationships for up to 16 channels;
that is, to set the device to MONO mode or POLY mode. Omni Mode On enables
the device to receive voice messages on all channels.
System Messages
System messages apply to the complete system rather than specific channels and
do not contain any channel numbers. There are three types of system messages:
common messages, real-time messages, and exclusive messages. In the following,
we will see how these messages are used.
Common Messages
These messages are common to the complete system. They provide for functions
such as selecting a song, setting the song position pointer with the number of
beats, and sending a tune request to an analog synthesizer.
System Real Time Messages
These messages are used for setting the system's real-time parameters. These
parameters include the timing clock, starting and stopping the sequencer, resuming
the sequencer from a stopped position, and resetting the system.
System Exclusive messages
These messages contain manufacturer-specific data such as identification, serial
number, model number, and other information. Here, a standard file format is
generated which can be moved across platforms and applications.
JPEG Motion Image:
JPEG motion images are embedded in the AVI RIFF file format. There are two
standards available:
• MPEG 1 ~ In this, patent and copyright issues are there.
• MPEG 2 ~ It provides better resolution and picture quality.
TWAIN
A standard interface was designed to allow applications to interface with different
types of input devices, such as scanners, digital still cameras, and so on, using a
generic TWAIN interface without creating device-specific drivers. The benefits of
this approach are as follows:
1. Application developers can code to a single TWAIN specification that allows
applications to interface with all TWAIN-compliant input devices.
2. Device manufacturers can write device drivers for their proprietary devices to
the TWAIN specification, allowing the devices to be used by all TWAIN-compliant
applications.
TWAIN Specification Objectives
The TWAIN specification was started with a number of objectives:
· Supports multiple platforms: including Microsoft Windows, Apple Macintosh
OS System 6.x or 7.x, UNIX, and IBM OS/2.
· Supports multiple devices: including scanners, digital cameras, frame grabbers, etc.
· Standard extendibility and backward compatibility: The TWAIN architecture is
extensible for new types of devices and new device functionality. New versions of
the specification are backward compatible.
· Easy to use: The standard is well documented and easy to use.
The TWAIN architecture defines a set of application programming interfaces
(APIs) and a protocol to acquire data from input devices. It is a layered
architecture with an application layer, a protocol layer, an acquisition layer and a
device layer: the protocol and acquisition layers are sandwiched between the
application and device layers. The protocol layer is responsible for communication
between the application and acquisition layers. The acquisition layer contains the
virtual device driver that controls the device; this virtual layer is also called the
source.
Application Layer: A TWAIN application sets up a logical connection with a
device. TWAIN does not impose any rules on the design of an application.
However, it sets guidelines for the user interface to select sources (logical devices)
from a given list of logical devices, and also specifies user interface guidelines for
acquiring data from the selected sources.
The Protocol Layer: The application layer interfaces with the protocol layer. The
protocol layer is responsible for communications between the application and
acquisition layers. The protocol layer does not specify the method of
implementation of sources, physical connection to devices, control of devices, or
other device-related functionality. This clearly highlights that applications are
independent of sources. The heart of the protocol layer, as shown in the figure, is
the Source Manager. It manages all sessions between an application and the
sources, and monitors data acquisition transactions.
The functionality of the Source Manager is as follows:
· Provide a standard API for all TWAIN-compliant sources
· Provide selection of sources for a user from within an application
· Establish logical sessions between applications and sources, and also manage
sessions between multiple applications and multiple sources
· Act as a traffic cop to make sure that transactions and communications are routed
to appropriate sources, and also validate all transactions
· Keep track of sessions and unique session identities
· Load or unload sources as demanded by an application
· Pass all return codes from the source to the application
· Maintain a default source
The Acquisition Layer: The acquisition layer contains the virtual device driver;
it interacts directly with the device driver. This virtual layer is also called the
source. The source can be local and logically connected to a local device, or
remote and logically connected to a remote device (i.e., a device over the
network). The source performs the following functions:
~ Control of the device.
~ Acquisition of data from the device.
~ Transfer of data in an agreed (negotiated) format. This can be transferred in
native format or another filtered format.
~ Provision of a user interface to control the device.
The Device Layer: The purpose of the device driver is to receive software
commands and control the device hardware accordingly. This is generally
developed by the device manufacturer and shipped with the device.
NEW WAVE RIFF File Format: This format contains two subchunks: (i) Fmt
and (ii) Data.
It may also contain optional subchunks: (i) Fact, (ii) Cue points, (iii) Playlist and
(iv) Associated data list.
Fact Chunk: It stores file-dependent information about the contents of the WAVE
file.
Cue Points Chunk: It identifies a series of positions in the waveform data stream.
Playlist Chunk: It specifies a play order for series of cue points.
Associated Data Chunk: It provides the ability to attach information, such as
labels, to sections of the waveform data stream.
Inst Chunk: It stores the sampled sound synthesizer's samples.
14.(a) Explain about JPEG Compression.
The ISO and CCITT working committees joined together and formed the Joint
Photographic Experts Group (JPEG). It is focused exclusively on still image
compression. Another joint committee, known as the Motion Picture Experts
Group (MPEG), is concerned with full motion video standards.
JPEG is a compression standard for still color images and grayscale images,
otherwise known as continuous tone images. JPEG has been released as an ISO
standard in two parts
Part 1 specifies the modes of operation, the interchange formats, and the
encoder/decoder specifications for these modes, along with substantial
implementation guidelines. Part 2 describes compliance tests which determine
whether the implementation of an encoder or decoder conforms to the standard
specification of Part 1, to ensure interoperability of systems compliant with JPEG
standards.
Requirements addressed by JPEG
• The design should address image quality.
• The compression standard should be applicable to practically any kind of
continuous-tone digital source image.
• It should be scalable from completely lossless to lossy ranges.
• It should provide sequential encoding.
• It should provide for progressive encoding.
• It should also provide for hierarchical encoding.
• The compression standard should provide the option of lossless encoding so
that images can be guaranteed to provide full detail at the selected resolution
when decompressed.
Definitions in the JPEG Standard
The JPEG Standards have three levels of definition as follows:
* Base line system
* Extended system
* Special lossless function.
The baseline system must reasonably decompress color images, maintain a high
compression ratio, and handle from 4 bits/pixel to 16 bits/pixel. The extended
system covers the various encoding aspects such as variable-length encoding,
progressive encoding, and the hierarchical mode of encoding. The special lossless
function is also known as predictive lossless coding. It ensures that, at the selected
resolution, there is no loss of any detail that was present in the original source
image.
Overview of JPEG Components
JPEG standard components are:
(i) Baseline Sequential Codec
(ii) DCT Progressive Mode
(iii) Predictive Lossless Encoding
(iv) Hierarchical Mode.
These four components describe four different levels of JPEG compression. The
baseline sequential codec defines a rich compression scheme; the other three modes
describe enhancements to this baseline scheme for achieving different results.
Some of the terms used in JPEG methodologies are:
Discrete Cosine Transform (DCT)
DCT is closely related to the Fourier transform. Fourier transforms are used to
represent two-dimensional signals. DCT uses a similar concept to reduce the
gray-scale level or color signal amplitudes to equations that require very few
points to locate the amplitude; the Y-axis locates amplitude and the X-axis locates
frequency.
DCT Coefficients
The output amplitudes of the set of 64 orthogonal basis signals are called DCT
coefficients.
Quantization
This is a process that attempts to determine what information can be safely
discarded without a significant loss in visual fidelity. It uses the DCT coefficients
and provides a many-to-one mapping. The quantization process is fundamentally
lossy due to its many-to-one mapping.
De Quantization
This process is the reverse of quantization. Note that since quantization uses a
many-to-one mapping, the information lost in that mapping cannot be fully
recovered.
Entropy Encoder / Decoder
Entropy is defined as a measure of randomness, disorder, or chaos, as well as a
measure of a system's ability to undergo spontaneous change. The entropy encoder
compresses quantized DCT coefficients more compactly based on their spatial
characteristics. The baseline sequential codec uses Huffman coding. Arithmetic
coding is another type of entropy encoding.
Huffman Coding
Huffman coding requires that one or more sets of Huffman code tables be
specified by the application for encoding as well as decoding. The Huffman tables
may be predefined and used within an application as defaults, or computed
specifically for a given image.
Baseline Sequential codec
It consists of three steps: formation of DCT coefficients, quantization, and entropy
encoding. It is a rich compression scheme.
DCT Progressive Mode
The key steps of formation of DCT coefficients and quantization are the same as
for the baseline sequential codec. The key difference is that each image component
is coded in multiple scans instead of a single scan.
Predictive Lossless Encoding
It defines a means of approaching lossless continuous-tone compression. A
predictor combines sample areas and predicts neighboring areas on the basis of the
sample areas. The predicted areas are checked against the fully lossless sample for
each area. The difference is encoded losslessly using Huffman or arithmetic
entropy encoding.
Hierarchical Mode
The hierarchical mode provides a means of carrying multiple resolutions. Each
successive encoding of the image is reduced by a factor of two, in either the
horizontal or vertical dimension.
JPEG Methodology
The JPEG compression scheme is lossy, and utilizes a forward discrete cosine
transform (forward DCT mathematical function), a uniform quantizer, and
entropy encoding. The DCT function removes data redundancy by transforming
data from a spatial domain to a frequency domain; the quantizer quantizes DCT
coefficients with weighting functions to generate quantized DCT coefficients
optimized for the human eye; and the entropy encoder minimizes the entropy of
the quantized DCT coefficients. The JPEG method is a symmetric algorithm:
decompression is the exact reverse process of compression.
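A minimal Python sketch (assumed, for illustration) of the two lossy steps named
above for a single 8x8 block: a forward DCT followed by uniform quantization
with a weighting table. Entropy encoding is omitted, and the flat quantization
table is a stand-in for a real one:

    import math

    N = 8

    def fdct(block):
        # 2-D forward DCT of an 8x8 block of pixel values.
        def c(k):
            return math.sqrt(0.5) if k == 0 else 1.0
        out = [[0.0] * N for _ in range(N)]
        for u in range(N):
            for v in range(N):
                s = sum(block[x][y]
                        * math.cos((2 * x + 1) * u * math.pi / 16)
                        * math.cos((2 * y + 1) * v * math.pi / 16)
                        for x in range(N) for y in range(N))
                out[u][v] = 0.25 * c(u) * c(v) * s
        return out

    def quantize(coeffs, qtable):
        # Many-to-one mapping: divide each coefficient by its weight and round.
        return [[round(coeffs[u][v] / qtable[u][v]) for v in range(N)]
                for u in range(N)]

    flat = [[128] * N for _ in range(N)]    # a uniform gray block
    q    = [[16] * N for _ in range(N)]     # flat stand-in quantization table
    print(quantize(fdct(flat), q)[0][0])    # 64: only the DC coefficient survives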

(b) Explain in detail about RAID technology for storage for multimedia
Systems.
RAID (Redundant Array of Inexpensive Disks) is an alternative mass-storage
approach for multimedia systems that combines improvements in throughput speed
and reliability. RAID is an array of multiple disks across which the data is spread.
It achieves fault tolerance, large storage capacity and performance improvement,
and using RAID for hot backups is also economical. RAID provides:
1.Hot backup of disk systems
2.Large volume storage at lower cost
3.Higher performance at lower cost
4.Ease of data recovery
5.High MTBF.
There are six levels of RAID available.
(i) RAID Level 0 Disk Striping
It spreads data across drives. Data is striped to spread segments of data across
multiple drives. Data striping provides high transfer rate. Mainly, it is used for
database applications.
RAID level 0 provides performance improvement, achieved by overlapping
disk reads and writes. Overlapping here means that while segment 1 is being
written to drive 1, the segment 2 write can be initiated for drive 2. The actual
performance achieved depends on the design of the controller and how it manages
disk reads and writes.
2.RAID Level 1 Disk Mirroring
The Disk mirroring causes two copies of every file to be written on two separate
drives. (Data redundancy is achieved).
These drives are connected to a single disk controller. It is useful in mainframe and
networking systems. Apart from that, if one drive fails, the other drive which has
its copy can be used.
Performance: Writing is slow. Reading can be speeded up by overlapping seeks.
The read transfer rate and the number of I/Os per second are better than for a
single drive.
I/O transfer rate (bandwidth) = No. of drives x drive I/O transfer rate
No. of I/Os per second = I/O transfer rate / Average size of transfer
Uses:
It provides backup in the event of disk failures in file servers.
Another form of disk mirroring is duplexing, which uses two separate controllers;
this second controller enhances both fault tolerance and performance.
3.RAID Level 2 - Bit Interleaving of Data:
It contains an array of multiple drives connected to a disk array controller. Data
(written one bit at a time) is bit-interleaved across multiple drives. Multiple check
disks are used to detect and correct errors. It provides the ability to handle very
large files, and a high level of integrity and reliability, so it is good for multimedia
systems. RAID Level 2 utilizes a Hamming error-correcting code to correct
single-bit and double-bit errors.
Drawbacks:
(i) It requires multiple drives for error correction. (ii) It is an expensive approach
to data redundancy. (iii) It is slow.
Uses: It is used in multimedia systems, because we can store bulk video and
audio data.
4.RAID Level-3 Parallel Disk Array:
RAID 3 subsystem contains an array of multiple data drives and one parity drive,
connected to a disk array controller. The difference between RAID 2 and RAID 3
is that RAID 3 employs only parity checking instead of the full Hamming code
error detection and correction. It has the advantages of a high transfer rate, better
cost effectiveness than RAID 2, and data integrity.
Performance and Uses:
RAID 3 is not suitable for small file transfers because the data is distributed and
block-interleaved over multiple drives.
It is cost effective, since it requires one drive for parity checking.
5.RAID Level-4 Sector Interleaving:
Sector interleaving means writing successive sectors of data on different drives. As
in RAID 3, RAID 4 employs multiple data drives and typically a single dedicated
parity drive. Unlike RAID 3, where bits of data are written to successive disk
drives, in RAID 4 the first sector of a block of data is written to the first drive, the
second sector of data is written to the second drive, and so on. The data is
interleaved at the sector level. RAID Level 4 offers a cost-effective improvement
in performance with data integrity.
RAID Level-5 Block Interleaving:
In RAID Level 5, as in all the other RAID systems, multiple drives are connected
to a disk array controller. The disk array controller contains multiple SCSI
channels. A RAID 5 system can be designed with a single SCSI host adapter with
multiple drives connected to the single SCSI channel. Unlike RAID Level-4, where
the data is sector-interleaved, in RAID Level-5 the data is block-interleaved.
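The parity used by RAID levels 3 through 5 is a simple XOR across the data
blocks; a minimal Python sketch (assumed) shows how a lost block is rebuilt from
the parity and the surviving blocks:

    def xor_blocks(blocks):
        # XOR corresponding bytes of all blocks together.
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks striped across three data drives
    parity = xor_blocks(data)            # stored on the parity drive
    # Drive 2 fails: rebuild its block from the parity and the remaining drives.
    rebuilt = xor_blocks([parity, data[0], data[2]])
    print(rebuilt == data[1])            # True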
15. (a)(i)Explain the types of Multimedia Authoring Systems.
There are varying degrees of complexity among authoring systems. For example,
dedicated authoring systems that handle only one kind of object for a single user
are simple, whereas programmable systems are the most complex.
Dedicated Authoring Systems
Dedicated authoring systems are designed for a single user and generally for
single streams. Designing this type of authoring system is simple, but if it should
be capable of combining even two object streams, it becomes complex. The
authoring is performed on objects captured by the local video camera and image
scanner, or on objects stored in some form of multimedia object library. In the
case of a dedicated authoring system, users need not be experts in multimedia or
professional artists. But dedicated systems should be designed to provide user
interfaces that are extremely intuitive and follow real-world metaphors. A
structured design approach will be useful in isolating the visual and procedural
design components.
Timeline-based authoring
In a timeline-based authoring system, objects are placed along a timeline. The
timeline can be drawn on the screen in a window in a graphic manner, or it can be
created using a script in a manner similar to a project plan. The user must specify
a resource object and position it in the timeline. On playback, the object starts
playing at that point in the time scale. In most timeline-based approaches, once the
multimedia object has been captured in a timeline, it is fixed in location and
cannot be manipulated easily, so a single timeline causes loss of information
about the relative timelines for each individual object.
Structured Multimedia Authoring
A structured multimedia authoring approach was presented by Hardman. It is an
evolutionary approach based on structured object-level construction of complex
presentations. This approach consists of two stages:
(i) The construction of the structure of a presentation.
(ii) Assignment of detailed timing constraints.
A successful structured authoring system must provide the following capabilities
for navigating through the structure of presentation.
1.Ability to view the complete structure.
2.Maintain a hierarchy of objects.
3.Capability to zoom down to any specific component.
4.View specific components in part or from start to finish.
5.Provide a running status of percentage full of the designated length of the
presentation.
6.Clearly show the timing relations between the various components.
7.Ability to address all multimedia types including text, image, audio, video and
frame based digital
images.
The author must ensure that there is a good fit within each object hierarchy level.
The navigation design of authoring system should allow the author to view the
overall structure while examining a specific object segment more closely.
Programmable Authoring Systems
Early structured authoring tools did not allow authors to express automatic
functions for handling certain routine tasks. Programmable authoring systems have
improved on this by providing powerful functions based on image processing and
analysis, and by embedding program interpreters to use image-processing
functions. The capability of such an authoring system is enhanced by building user
programmability into the authoring tool to perform the analysis and to manipulate
the stream based on the analysis results. The programmability allows the following
tasks to be performed through the program interpreter rather than manually:
Return the time stamp of the next frame. Delete a specified movie segment. Copy
or cut a specified movie segment to the clipboard. Replace the current segment
with the clipboard contents.
Multisource Multi-user Authoring Systems
We can have an object hierarchy in a geographic plane; that is, some objects may
be linked to other objects by position, while others may be independent and fixed
in position. We need object data, and information on composing it. Composing
means locating it in reference to other objects in time as well as space. Once the
object is rendered (the multimedia object is displayed on the screen), the author
can manipulate it and change its rendering information, which must be available at
the same time for display. If there were no limits on network bandwidth and server
performance, it would be possible to assemble the required components on cue, at
the right time to be rendered.
In addition to the multi-user compositing function, a multi-user authoring system
must provide resource allocation and scheduling of multimedia objects.
Telephone Authoring Systems
The telephone is being linked into multimedia electronic mail applications.
1.The telephone can be used as a reading device by providing full text-to-speech
synthesis capability, so that a user on the road can have electronic mail messages
read out over the telephone.
2.The phone can be used for voice command input for setting up and managing
voice mail messages. Digitized voice clips are captured via the phone and
embedded in electronic mail messages.
3.As the capability to recognize continuous speech is deployed, phones can be
used to create electronic mail messages where the voice is converted to ASCII text
on the fly by high-performance voice recognition engines.
Phones provide a means of using voice where the alternative of text on a screen is
not available. A phone can be used to provide interactive access to electronic mail,
calendar information databases, public information databases and news reports,
electronic newspapers and a variety of other applications. Integrating all these
applications in a common authoring tool requires great skill in planning.
The telephone authoring systems support different kinds of applications. Some of
them are:
1.Workstation controls for phone mail.
2.Voice command controls for phone mail.
3.Embedding of phone mail in electronic mail.
4.Integration of phone mail and voice messages with electronic mail.
5.Voice synthesis in integrated voice mail and electronic mail.
6.Local/remote continuous speech recognition.
(ii) Explain about the Hypermedia Message Components.
A hypermedia message may be a simple message in the form of text with an
embedded graphic, sound track, or video clip, or it may be the result of analysis of
material based on books, CD-ROMs, and other on-line applications. An authoring
sequence for a message based on such analysis may consist of the following
components.
1. The user may have watched some video presentation on the material and may
want to attach a part of that clip in the message. While watching it, the user marks
possible quotes and saves an annotated copy.
2. Some pages of the book are scanned as images. The images provide an
illustration or a clearer analysis of the topic
3. The user writes the text of the message using a word processor. The text
summarizes the highlights of the analysis and presents conclusions.
These three components must be combined in a message using an authoring tool
provided by the messaging system. The messaging system must prompt the user to
enter the name of the addressee for the message.
The message system looks up the name in an online directory and converts it to an
electronic address, as well as routing information, before sending the message. The
user is now ready to compose the message. The first step is to copy the word-processed
text report prepared in step 3 above into the body area of the message,
or to use the text editor provided by the messaging system. The user then marks
the spots where the images are referenced and uses the link and embed facilities of
the authoring tool to link in references to the images. The user also marks one or
more spots for video clips and again uses the link and embed facilities to add the
video clips to the message. When the message is fully composed, the user signs it
(electronic signature) and mails the message to the addressee (recipient). The
addressing system must ensure that the images and video clips referenced in the
message are also transferred to a server "local" to the recipient.
Text Messages
In earlier days, messaging systems used a limited subset of plain ASCII text. Later,
messaging systems were designed to allow users to communicate using short
messages. Then, new messaging standards have added on new capabilities to
simple messages. They provide various classes of service and delivery reports.
Typical Electronic mail message
Other capabilities of messaging systems include a name and address directory of
all users accessible to the messaging system.
Rich-Text Messages
Microsoft defined a standard for exporting and importing text data that included
character set, font table, section and paragraph formatting, document formatting,
and color information, called Rich Text Format (RTF). This standard is used for
storage as well as import and export of text files across a variety of word-processing
and messaging systems. When sections of a document are cut and
pasted into another application, the font and formatting information is retained.
This allows the target application to display the text in the nearest equivalent fonts
and formats. Rich-text messages based on the RTF format provide the capability
to create messages in one word processor and edit them in another at the recipient
end. Most messaging systems provide rich-text capability for the body field of a
message.
Voice Messages
Voice mail systems answer telephones using recorded messages and direct the
caller through a sequence of touch tone key operations until the caller is connected
to the desired party or is able to leave a recorded message.
Audio (Music)
The Musical Instrument Digital Interface (MIDI) was developed initially by the
music industry to allow computer control of, and music recording from, musical
instruments such as digital pianos and electric keyboards. MIDI interfaces are now
being used for a variety of peripherals, including digital pianos, digital organs,
video games with high-fidelity sound output, and business presentations.
Full-Motion Video Management
Use of full-motion video for information repositories and memos are more
informative. More information can be 'conveyed and explained in a short full-
motion video clip than can be conveyed In a long text document. Because a picture
is equivalent to thousand words.
Full Motion video Authoring System
An authoring system is an important component of a multimedia messaging
system. A good authoring system must provide a number of tools for the creation
and editing of multimedia objects. The subset of tools that are necessary are listed
below:
1. A video capture program - to allow fast and simple capture of digital video from
analog
sources such as a video camera or a video tape. .
2. Compression and decompression Interfaces for compressing the captured video
as it is being captured.
3. A video editor with the ability to decompress, combine, edit, and compress
digital video clips.
4. Video indexing and annotating software for marking sections of a videoclip and
recording annotations.
Identifying and indexing video clips for storage. Full-Motion Video Playback
Systems The playback system allows the recipient to detach the embedded vIdeo
reference object, Interpret its contents and retrieve the actual video clip from a
specialized video server and launch the Playback application. A number of factors
are involved in playing back the video correctly. They are:
1.How the compression format used for the storage of the video clip relates to the
available hardware and software facilities for decompression.
2.Resolution of the screen and the system facilites available for managing display
windows. The display resolution may be higher or lower than the resolution of the
source of the video clip.
3.The CPU processing power and the expected level of degradation as well as
managing the degraded output on the fly.
4.Ability to determine hardware and software facilities of the recipient's system,
and adjusting playback, parameters to provide the best resolution and perfonnance
on playback. The three main technologies for playing full motion video are
microsoft's video for windows: Apple's Quicktime, and Intel's Indeo.
Video for Windows (VFW): It is the most common environment for multimedia
messaging. VFW provides capture, edit, and playback tools for full-motion video:
• The VidCap tool, designed for fast digital video capture.
• The VidEdit tool, designed for decompressing, editing, and compressing
full-motion digital video.
• The VFW playback tool.
The VFW architecture uses OLE. With the development of DDE and OLE,
Microsoft introduced in Windows the capability to link or embed multimedia
objects in a standardized manner; hence a variety of Windows-based applications
can interact with them. We can add full-motion video to any Windows-based
application with the help of VFW. The VFW playback tool is designed to use a
number of codecs (software encoders/decoders) for decompressing and playing
video files. The default is for AVI files.
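Before choosing a codec, a playback tool must first recognize the container. The sketch below checks the RIFF signature that every AVI file begins with (the "RIFF"/"AVI " layout is part of the published AVI format); error handling is kept minimal for clarity.

    import struct

    def is_avi(path):
        # An AVI file is a RIFF container: bytes 0-3 are "RIFF", bytes 4-7 a
        # little-endian chunk size, and bytes 8-11 the form type "AVI ".
        with open(path, "rb") as f:
            header = f.read(12)
        if len(header) < 12:
            return False
        riff, _size, form = struct.unpack("<4sI4s", header)
        return riff == b"RIFF" and form == b"AVI "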
OR
(b)(i) Explain about the components of a distributed multimedia system.
A typical multimedia application environment consists of the following
components:
1. Application software.
2. Container object store.
3. Image and still video store.
4. Audio and video component store.
5. Object directory service agent.
6. Component service agent.
7. User interface and service agent.
8. Networks (LAN and WAN).
Application Software
The application software performs a number of tasks related to a specific business
process. A business process consists of a series of actions that may be performed
by one or more users. The basic tasks combined to form an application include the
following:
(1) Object Selection - The user selects a database record or a hypermedia
document from a file system, database management system, or document server.
(2) Object Retrieval - The application retrieves the base object.
(3) Object Component Display - Some document components are displayed
automatically when the user moves the pointer to the field or button associated
with the multimedia object.
(4) User Initiated Display - Some document components require user action before
playback/display.
(5) Object Display Management and Editing: Component selection may invoke a
component control subapplication which allows a user to control playback or edit
the component object.
Document Store
A document store is necessary for applications that require storage of large volumes
of documents. The following describes some characteristics of document stores:
1. Primary Document Storage: A file system or database that contains the primary
document objects (container objects). Other attached or embedded documents and
multimedia objects may be stored in the document server along with the container
object.
2. Linked Object Storage: Embedded components, such as text and formatting
information, and linked components, such as pointers to the image, audio, and
video components contained in a document, may be stored on separate servers.
3. Linked Object Management: Link information contains the name of the
component, its service class or type, general attributes such as size and duration of
play (for isochronous objects), and the hardware and software requirements for
rendering.
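A hypothetical sketch of such a link record (the field names are invented for illustration and not taken from any particular system):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class LinkInfo:
        name: str                      # name of the linked component
        service_class: str             # service class or type, e.g. "image", "audio", "video"
        size_bytes: int                # general attribute: size of the object
        duration_s: float = 0.0        # duration of play, meaningful for isochronous objects
        requirements: List[str] = field(default_factory=list)  # hardware/software needed to render

    clip = LinkInfo("intro_clip", "video", 8_000_000, 60.0, ["MPEG decoder"])
    print(clip.service_class, clip.duration_s)   # prints: video 60.0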
Image and Still Video Store
An image and still video store is a database system optimized for the storage of
images. Most systems employ optical disk libraries, which consist of multiple
optical disk platters that are played back by automatically loading the appropriate
platter in the drive under device driver control. The characteristics of image and
still video stores are as follows:
(i) Compressed information
(ii) Multi-image documents
(iii) Related annotations
(iv) Large volumes
(v) Migration between high-volume media, such as an optical disk library, and
high-speed media, such as magnetic cache storage
(vi) Shared access: the server software managing the store has to be able to manage
the different requirements
Audio and Full-Motion Video Store
Audio and video objects are isochronous. The following lists some characteristics
of audio and full-motion video object stores:
• Large-capacity file system: A compressed video object can be as large as six
to ten megabytes for one minute of video playback.
• Temporary or permanent storage: Video objects may be stored temporarily
on client workstations, on servers providing disk caches, and on multiple
audio or video object servers.
• Migration to high-volume/lower-cost media: Migration and management of
online storage are of much greater importance, and more complex, than for
images.
• Playback isochronicity: Playing back a video object requires a consistent
speed without breaks.
• Multiple shared access: Objects being played back in a stream mode must
remain accessible by other users.
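Isochronicity amounts to fixed-interval delivery. Below is a minimal sketch of frame pacing, where render_frame is a stand-in stub for the real decode-and-display call: the loop schedules each frame against a monotonic clock so that decode jitter does not accumulate into drift.

    import time

    def render_frame(frame):
        pass   # hypothetical stand-in for the real decode-and-display call

    def play(frames, fps=30):
        interval = 1.0 / fps
        deadline = time.monotonic()
        for frame in frames:
            render_frame(frame)
            deadline += interval                  # next frame's time slot
            delay = deadline - time.monotonic()
            if delay > 0:
                time.sleep(delay)                 # wait out the rest of the slot
            # if delay <= 0 the frame was late; keep going rather than pause playback

    play(range(90), fps=30)   # roughly three seconds of paced "playback"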
Object Directory Service Agent
The directory service agent is a distributed service that provides a directory of all
multimedia objects on the servers tracked by that element of the directory service
agent. The following describes the various services provided by a directory service
agent:
(1) Directory Service: It lists all multimedia objects by class and server location.
(2) Object Assignment: The directory service agent assigns a unique identification
to each multimedia object.
(3) Object Status Management: The directory service must track the current usage
status of each object.
(4) Directory Service Domains: The directory service should be modular to allow
setting up independent directory service domains.
(5) Directory Service Server Elements: Each multimedia object server must have a
directory service element that resides on either the server itself or some other
network resource.
(6) Network Access: The directory service agent must be accessible from any
workstation on the network.
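A hypothetical in-memory sketch of services (1) to (3) above (the class and method names are invented; a real agent would be distributed and persistent): it assigns unique identifications, records class and server location, and tracks usage status.

    import itertools

    class DirectoryService:
        def __init__(self):
            self._ids = itertools.count(1)
            self._objects = {}   # object id -> record

        def register(self, obj_class, server):
            oid = next(self._ids)                      # object assignment: unique id
            self._objects[oid] = {"class": obj_class,
                                  "server": server,
                                  "status": "idle"}    # object status management
            return oid

        def lookup(self, obj_class):
            # directory service: list objects of a class with their server locations
            return [(oid, rec["server"]) for oid, rec in self._objects.items()
                    if rec["class"] == obj_class]

    ds = DirectoryService()
    vid = ds.register("video", "vserver-1")
    print(ds.lookup("video"))   # prints: [(1, 'vserver-1')]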
(ii) Explain about key issues of data organization for multimedia
systems.
Data Independence:
Flexible access to a variety of distributed databases for one or more applications
requires that the data be independent from the application so that future
applications can access the data without constraints related to a previous
application. Important features of data-independent design are:
1. Storage design is independent of specific applications.
2. Explicit data definitions are independent of application programs.
3. Users need not know data formats or physical storage structures.
4. Integrity assurance is independent of application programs.
5. Recovery is independent of application programs.
Common Distributed Database Architecture:
A common distributed database architecture is characterized by the insulation of
data from applications and by distributed application access. Key features of this
architecture are:
1. The ability for multiple independent data structures to co-exist in the system
(multiple server classes).
2. Uniform distributed access by clients.
3. A single point of recovery for each database server.
4. Convenient data reorganization to suit requirements.
5. Tunability and creation of object classes.
6. Expandability.
Multiple Data Servers:
A database server is a dedicated resource on a network, accessible to a number of
applications. When a large number of users need to access the same resources, a
problem arises. This problem is solved by setting up multiple data servers that
have copies of the same resources.
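A minimal sketch of the idea, with invented server names: requests for the same resource are rotated across the replica servers round-robin so that no single copy becomes the bottleneck.

    import itertools

    # Hypothetical servers, each holding a copy of the same resource.
    replicas = ["dbserver-a", "dbserver-b", "dbserver-c"]
    _rotation = itertools.cycle(replicas)

    def pick_server():
        # Round-robin spreads requests evenly across the copies.
        return next(_rotation)

    print([pick_server() for _ in range(4)])
    # prints: ['dbserver-a', 'dbserver-b', 'dbserver-c', 'dbserver-a']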
**********