Module 4
DCE (Distributed Computing Environment) builds a distributed computing environment on top of existing operating systems.
It was apparent that many of the OSF consortium's
customers wanted to build distributed applications on
top of OSF/1 and other UNIX systems.
OSF responded to this need by issuing a "Request for
Technology" in which they asked companies to supply
tools and other software needed to put together a
distributed system.
Many companies made bids, which were carefully
evaluated.
OSF then selected a number of these offerings, and
developed them further to produce a single integrated
package — DCE — that could run on OSF/1 and also on
other systems.
The goal was to provide a coherent, seamless environment,
built on top of existing operating systems, that can serve
as a platform for running distributed applications.
Scheduling algorithms
1) FIFO - the scheduler searches the priority queues from highest
to lowest and takes the first thread it finds.
The selected thread can run as long as it needs to.
When it has finished, it is removed from the queue of
runnable threads, and the scheduler once again
searches the queues from high to low.
2) Round robin - the scheduler locates the highest-priority
populated queue and runs each thread in it for a fixed quantum.
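The two policies can be sketched as a toy model; the thread and queue representations here are illustrative, not DCE's actual data structures:

```python
from collections import deque

# Toy model of the two thread-scheduling policies (illustrative data
# structures; queues[0] is the highest priority).

def fifo_schedule(queues):
    """Search the queues from highest to lowest priority; the selected
    thread runs as long as it needs to, then the search restarts."""
    order = []
    for q in queues:
        while q:
            order.append(q.popleft())      # runs to completion
    return order

def round_robin_schedule(queues, quantum=1):
    """Locate the highest populated queue and run each of its threads
    for a fixed quantum, cycling until the queue drains."""
    order = []
    for q in queues:
        while q:
            name, remaining = q.popleft()
            order.append(name)             # one quantum of service
            if remaining - quantum > 0:
                q.append((name, remaining - quantum))
    return order

print(round_robin_schedule([deque([("A", 2), ("B", 1)])]))  # ['A', 'B', 'A']
```

Note that under round robin a thread needing two quanta ("A" above) reappears after the others in its queue have each had a turn.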
The RPC daemon on each server machine maintains a
database of (server, endpoint) entries.
The client stub marshals the parameters and passes the resulting buffer (possibly in chunks) to the runtime library for transmission
using the protocol chosen at binding time.
When a message arrives at the server side, it is routed to the correct server based on the endpoint contained in the incoming
message.
The runtime library passes the message to the server stub, which unmarshals the parameters and calls the server. The reply goes
back by the reverse route.
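The marshal/dispatch/unmarshal path above can be sketched as follows; the add operation, endpoint number, and wire format are invented for illustration:

```python
import struct

# Illustrative marshaling for a made-up remote call add(x, y): the
# client stub packs an endpoint and the parameters into a flat buffer;
# the receiving side routes on the endpoint and unmarshals.

WIRE = struct.Struct("!H i i")   # 16-bit endpoint, two 32-bit ints

def client_stub_marshal(endpoint, x, y):
    return WIRE.pack(endpoint, x, y)          # buffer handed to runtime

def server_dispatch(buffer, servers):
    endpoint, x, y = WIRE.unpack(buffer)      # server stub unmarshals
    return servers[endpoint](x, y)            # route on the endpoint

servers = {7: lambda x, y: x + y}             # endpoint 7 -> add
msg = client_stub_marshal(7, 40, 2)
print(server_dispatch(msg, servers))          # 42
```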
RPC semantics
The default is at-most-once operation: no call is ever carried out more than once, even in the
face of system crashes. What this means is that if a server crashes during an RPC and then
recovers quickly, the client does not repeat the operation, for fear that it might already have
been carried out once.
Alternatively, it is possible to mark a remote procedure as idempotent (in the IDL file), in which
case it can be repeated multiple times without harm.
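A hedged sketch of how a client-side runtime might apply these two semantics; the names, exception, and retry count are assumptions, not DCE's API:

```python
# Sketch of client-side retry behavior under the two semantics above.
# ServerCrashed, call(), and the retry count are illustrative, not DCE API.

class ServerCrashed(Exception):
    pass

def call(remote_op, args, idempotent, retries=3):
    attempts = 1 + (retries if idempotent else 0)
    for _ in range(attempts):
        try:
            return remote_op(*args)
        except ServerCrashed:
            if not idempotent:
                raise       # at-most-once: it may already have run
    raise ServerCrashed     # idempotent, but all retries failed

log = []
def flaky_read(block):      # crashes once, then recovers quickly
    log.append(block)
    if len(log) == 1:
        raise ServerCrashed
    return f"data@{block}"

print(call(flaky_read, (5,), idempotent=True))   # data@5 (safely retried)
```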
Many distributed applications succeed or fail depending on the ability to synchronize remote clocks accurately.
DCE has a service called DTS (Distributed Time Service), whose goal is to keep clocks on
separate machines synchronized.
When the time clerk calculates that the possible error has passed the bound of what is allowed, it resynchronizes.
After resynchronizing, a clerk has a new UTC that is usually either ahead or behind its current one. It could just set the clock to the new value.
Even if the clock has to be set forward, it is better not to do it abruptly because some programs display information and then give the user a certain
number of seconds to react. DTS can make the correction gradually.
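The gradual correction can be illustrated with a toy tick loop; the tick and slew sizes here are made up:

```python
# Toy tick loop showing gradual correction: rather than jumping to the
# new UTC, each tick is lengthened or shortened slightly until the
# error is amortized. All units and sizes are made up.

def corrected_ticks(clock, target, ticks, tick_ms=10, slew_ms=1):
    """Advance `clock` for `ticks` ticks, slewing toward `target`."""
    for _ in range(ticks):
        error = target - clock
        if error > 0:
            clock += tick_ms + slew_ms   # behind: let the tick run long
        elif error < 0:
            clock += tick_ms - slew_ms   # ahead: let the tick run short
        else:
            clock += tick_ms
        target += tick_ms                # true time advances too
    return clock, target

clock, target = corrected_ticks(clock=1000, target=1005, ticks=5)
print(clock == target)   # True: 5 ms of error amortized over 5 ticks
```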
Computation of the new UTC from four time sources.
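One standard way to combine several (time, error) reports, consistent with the computation in the figure, is to intersect the reported intervals and take the midpoint. This is a sketch of the idea, not DTS's exact algorithm; outlier rejection for faulty clocks is omitted and the sample values are invented:

```python
# Sketch of combining several (time, max_error) reports into a new UTC:
# each source defines an interval [t - e, t + e]; intersect them and
# take the midpoint.

def new_utc(sources):
    lo = max(t - e for t, e in sources)   # latest lower bound
    hi = min(t + e for t, e in sources)   # earliest upper bound
    assert lo <= hi, "no overlap: a faulty source must be discarded first"
    return (lo + hi) / 2

print(new_utc([(100, 4), (102, 3), (101, 2), (99, 5)]))   # 101.0
```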
Time servers come in two kinds:
1) Local - keep the clocks of machines within a single cell synchronized.
2) Global - keep time servers in different cells synchronized.
The directory service keeps track of where all resources
are located and provides people-friendly names for
them.
The second one, the CDS clerk, is a daemon process that runs on the client machine. Its primary function is to do client caching.
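Client caching of the kind a clerk performs can be sketched as a TTL cache in front of the directory server; the TTL value, name syntax, and lookup callback are illustrative assumptions:

```python
import time

# Model of clerk-style client caching: fresh entries are answered
# locally, stale or missing ones go to the directory server.

class Clerk:
    def __init__(self, server_lookup, ttl=30.0):
        self.server_lookup = server_lookup   # call to the real server
        self.ttl = ttl
        self.cache = {}                      # name -> (value, fetched_at)

    def lookup(self, name):
        hit = self.cache.get(name)
        if hit and time.monotonic() - hit[1] < self.ttl:
            return hit[0]                    # served from the local cache
        value = self.server_lookup(name)     # cache miss: ask the server
        self.cache[name] = (value, time.monotonic())
        return value

server_calls = []
clerk = Clerk(lambda n: server_calls.append(n) or f"binding:{n}")
clerk.lookup("/.:/subsys/printer")
clerk.lookup("/.:/subsys/printer")           # answered from the cache
print(len(server_calls))                     # 1
```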
Two examples of a client looking up a name.
X.500 uses an object-oriented information model. Every item stored in an X.500 directory is an object.
Every object has one or more attributes. An attribute consists of a type and a value (e.g., C=US indicates that the type is country and the value is United States).
Example
The usual way for programs to access the GDS system is via the XDS (X/Open Directory Service) library.
When a call is made to one of the XDS procedures, the library checks whether the entry being manipulated is a CDS entry or a GDS entry. If it is a GDS entry, it makes the necessary XOM calls to get the job done.
The XDS interface contains only 13 calls. Of these, five set up and initialize the connection between the client and the directory server.
The eight calls that actually use directory objects are listed as follows
X.500 names are handled by GDS; DNS or mixed names are handled by CDS, as illustrated
In DCE, a principal is a user or process that needs to communicate securely. Human beings, DCE servers (such as CDS), and application servers (such as the software in an automated teller machine in a banking system) can all be principals.
Each principal has a UUID (Unique User IDentifier), which is a binary number associated with it and no other principal.
Authentication is the process of determining if a principal really is who he/she/it claims to be.
Authorization - Once a user has been authenticated, the question of which resources that user may access, and how, comes up. This issue is called authorization. In DCE, authorization is handled by associating an ACL (Access Control List) with each resource.
Protection in DCE is closely tied to the cell structure. Each cell has one security service, of which the authentication server is part. The security service maintains keys, passwords, and other security-related information in a secure database called the registry, which the local principals
have to trust.
A client sending an encrypted message to a server.
As a consequence of this hostile environment, the following requirements were placed on the design of the DCE security model from its inception.
1) at no time may user passwords appear in plaintext (i.e., unencrypted) on the network or be stored on normal servers. This requirement precludes doing authentication just by sending user passwords to an authentication server for approval.
2) user passwords may not even be stored on client machines for more than a few microseconds.
3) authentication must work both ways. That is, not only must the server be convinced who the client is, but the client must also be convinced who the server is. This requirement is necessary to prevent an intruder from impersonating either party.
4) the system must have firewalls built into it. If a key is somehow compromised (disclosed), the damage done must be limited. This requirement can be met by creating temporary keys for specific purposes and with short lifetimes, and using these for most work.
Major components of the DCE security system for a single cell.
Three of the servers below run on the security server machine.
Registry server - manages the security data base, the registry, which contains the names of all the principals, groups, and organizations.
Authentication server(ticket granting server) - is used when a user logs in or a server is booted. It verifies the claimed identity of the principal and issues a kind of ticket that allows the principal to do subsequent authentication without having to use the password again.
Privilege server - issues documents called PACs (Privilege Attribute Certificates) to authenticated users. PACs are encrypted messages that contain the principal's identity, group membership, and organizational membership, in such a way that servers can be instantly
convinced of their authenticity without any additional information.
Login facility - program that asks users their names and passwords during the login sequence. It uses the authentication and privilege servers to get the user logged in and to collect the necessary tickets and PACs for them.
Once a user is logged in, he can start a client process that can communicate securely with a server process using authenticated RPC.
When an authenticated RPC request comes in, the server uses the PAC to determine the user's identity, and then checks its ACL to see if the requested access is permitted.
Each server has its own ACL manager for guarding its own objects. Users can be added or removed from an ACL, permissions granted or removed, and so on, using an ACL editor program.
Each user has a secret key known only to himself and to the registry.
It is computed by passing the user's password through a one-way (i.e., noninvertible) function.
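The password-to-key step can be sketched with a stock one-way function. PBKDF2 here is a stand-in for "noninvertible"; it is not the function DCE itself used, and the salt and iteration count are arbitrary:

```python
import hashlib

# Derive a secret key from a password via a one-way function.

def secret_key_from_password(password, salt=b"cell-name"):
    # Both the user and the registry can derive this key, but the
    # password cannot be recovered from the stored key.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

k1 = secret_key_from_password("correct horse")
k2 = secret_key_from_password("correct horse")
print(k1 == k2, len(k1))   # True 32
```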
Servers also have secret keys. To enhance their security, these keys are used only briefly, when a user logs in or a server is booted. After that authentication is done with tickets and PACs.
A ticket is an encrypted data structure issued by the authentication server or ticket-granting server to prove to a specific server that the bearer is a client with a specific identity.
ticket = S, {session-key, client, expiration-time, message-id}K_S
where S is the server for whom the ticket is intended.
The information within curly braces is encrypted using the server's secret key, K_S.
When the server decrypts the ticket with its key, it obtains the session key to use when talking to the client.
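The ticket structure can be illustrated as follows. The XOR-keystream "cipher" is a deliberately toy construction to keep the sketch self-contained; DCE used a real cipher (DES), and all field values below are invented:

```python
import hashlib, json

# Toy illustration of the ticket above: the fields in curly braces are
# sealed under the server's key K_S.

def keystream(key, n):
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def crypt(key, data):   # XOR keystream: applying it twice decrypts
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def make_ticket(server_key, session_key, client, expiration_time):
    body = json.dumps({"session_key": session_key, "client": client,
                       "expiration_time": expiration_time}).encode()
    return ("S", crypt(server_key, body))    # S, {...}K_S

server_key = b"secret-of-server-S"
_, sealed = make_ticket(server_key, "sk42", "alice", 1_700_000_000)
print(json.loads(crypt(server_key, sealed))["client"])   # alice
```

Only a holder of K_S can open the sealed part, which is what lets the server trust the session key inside.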
Ticket vs. PAC:
1) A ticket establishes the identity of the sender; a PAC gives the numerical values of the user-id and group-ids that a particular principal is associated with.
2) A ticket is generated by the authentication server or ticket-granting server; a PAC is generated by the privilege server.
ACLs are managed by ACL managers, which are library procedures incorporated into every server.
When a request comes in to the server that controls the resource, the server decrypts the client's PAC to see what the client's ID and groups are, and based on these, the ACL, and the operation desired, the ACL manager is called to decide whether to grant or deny access.
This ACL manager divides the resources into two categories: simple resources, such as files and data base entries, and containers, such as directories and data base tables, that hold simple resources.
It distinguishes between users who live in the cell and foreign users, and for each category further subdivides them as owner, group, and other.
Seven standard rights are supported: read, write, execute, change-ACL, container-insert, container-delete, and test.
The change-ACL right grants permission to modify the ACL itself. The two container rights are useful for controlling who
may add or delete files from a directory.
A sample ACL.
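A minimal sketch of an ACL manager's decision, assuming simplified entry categories (user, group, other) and the seven standard rights; the matching order is an illustrative assumption:

```python
# Minimal ACL decision sketch using the seven standard rights.

RIGHTS = {"read", "write", "execute", "change-ACL",
          "container-insert", "container-delete", "test"}

def check_access(acl, pac, wanted):
    """acl maps 'user:<id>', 'group:<id>', or 'other' to sets of rights;
    pac is the decrypted PAC giving the principal's id and groups."""
    assert wanted in RIGHTS
    granted = set(acl.get(f"user:{pac['id']}", set()))
    for group in pac["groups"]:
        granted |= acl.get(f"group:{group}", set())
    if not granted:                      # fall back to the 'other' entry
        granted = acl.get("other", set())
    return wanted in granted

acl = {"user:alice": {"read", "write", "change-ACL"},
       "group:staff": {"read"},
       "other": {"test"}}
pac = {"id": "bob", "groups": ["staff"]}
print(check_access(acl, pac, "read"), check_access(acl, pac, "write"))
# True False
```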
Amoeba originated at the Vrije Universiteit, Amsterdam, The
Netherlands in 1981 as a research project in distributed and
parallel computing. It was designed primarily by Andrew S.
Tanenbaum and three of his Ph.D. students. By 1983, an initial
prototype, Amoeba 1.0, was operational.
Starting in 1984, a second group was set up. This work used
Amoeba 3.0, which was based on RPC. Using Amoeba 3.0, it
was possible for clients in Tromso to access servers in
Amsterdam transparently, and vice versa.
The primary goal of the project was to
build a transparent distributed operating
system.
An important distinction between Amoeba
and most other distributed systems is that
Amoeba has no concept of a “home
machine”.
A secondary goal of Amoeba is to provide
a testbed for doing distributed and
parallel programming.
Figure: a typical Amoeba configuration, consisting of a processor pool, a file server, a print server, and X-terminals.
All the computing power is located in one or
more processor pools, each consisting of a
substantial number of CPUs with their own
local memory and network connection.
The CPUs in a pool can be of different
architectures.
A capability is 128 bits long, divided into fields of 48, 24, 8, and 48 bits: the server port, the object number, the rights field, and the check field.
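The 48 + 24 + 8 + 48 bit field layout can be packed and unpacked like this; a sketch, since a real kernel manipulates the same 128 bits as raw bytes:

```python
# Packing the capability layout into one 128-bit integer, with the
# field order following the widths listed above.

def pack_capability(port, obj, rights, check):
    assert port < 2**48 and obj < 2**24 and rights < 2**8 and check < 2**48
    return (port << 80) | (obj << 56) | (rights << 48) | check

def unpack_capability(cap):
    return (cap >> 80,                 # 48-bit server port
            (cap >> 56) & 0xFFFFFF,    # 24-bit object number
            (cap >> 48) & 0xFF,        #  8-bit rights field
            cap & 0xFFFFFFFFFFFF)      # 48-bit check field

cap = pack_capability(0xAABBCCDDEEFF, 7, 0xFF, 0x123456789ABC)
print(unpack_capability(cap) == (0xAABBCCDDEEFF, 7, 0xFF, 0x123456789ABC))
# True
```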
Getparams - get the parameters associated with the server; allows the system administrator to read and write
parameters that control server operation. For example, the algorithm used to choose processors
can be selected using this mechanism.
Info - get an ASCII string briefly describing the object.
Touch - pretend the object was just used; tells the server that the touched object is still in use.
A process is an object in Amoeba.
Amoeba supports two forms of
communication: RPC, using point-to-point
message passing, and group communication.
RPC Primitives:
1. get_request(&header, buffer, bytes) -
indicates a server's willingness to listen on
a port.
2. put_reply(&header, buffer, bytes) -
done by a server when it has a reply to send
back to a client.
3. trans(&header1, buffer1, bytes1, &header2, buffer2, bytes2) -
used by a client to send a request and block
until the reply arrives.
Figure: the sequencer method of reliable broadcast. Application machines (A) and a sequencer machine (S) share a broadcast network. Machine B sends message M25 to the sequencer, which broadcasts it with the next sequence number. Each machine records the last sequence number it has seen (Last = 24 here), the sequencer keeps a history of buffered messages, and a machine that missed a message (here, 24) sends the sequencer a request for it.
The sender sends a message to the sequencer and
starts a timer:
(a) The broadcast comes back before the timer runs
out (the normal case). The sender just stops the timer.
(b) The broadcast has not come back before the
timer expires (either the message or the broadcast
has been lost). The sender retransmits the message.
If the original message was lost, no harm is done.
If the sender missed the broadcast, the sequencer
will detect the retransmission as a duplicate and
tell the sender everything is all right.
(c) The broadcast comes back before the
timer expires, but it is the wrong
broadcast. This occurs when two processes
attempt to broadcast simultaneously.
If A's message gets to the sequencer first,
it is broadcast; A sees the broadcast and
unblocks its application program.
B, however, sees A's broadcast and realizes
its own has failed to go first, so it accepts A's
broadcast and continues to wait for its own.
When a Request for Broadcast arrives at the sequencer:
(a) it checks whether the message is a retransmission;
if so, it simply informs the sender that the broadcast
has been done.
(b) if the message is new, it assigns the next sequence
number, broadcasts it, and adds it to the history buffer.
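The sequencer's handling of requests and retransmissions can be modeled as a small class; using a per-sender message id for duplicate detection is an illustrative assumption:

```python
# Toy sequencer: assign sequence numbers, keep a history, and detect
# retransmissions so a duplicate is acknowledged rather than re-sequenced.

class Sequencer:
    def __init__(self):
        self.next_seq = 0
        self.history = {}      # (sender, msg_id) -> assigned sequence no.
        self.broadcasts = []   # what went out on the wire, in order

    def request_broadcast(self, sender, msg_id, payload):
        key = (sender, msg_id)
        if key in self.history:                    # retransmission
            return ("already-broadcast", self.history[key])
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        self.history[key] = seq
        self.broadcasts.append((seq, payload))     # broadcast to all
        return ("broadcast", seq)

s = Sequencer()
print(s.request_broadcast("B", 25, "M25"))   # ('broadcast', 0)
print(s.request_broadcast("B", 25, "M25"))   # ('already-broadcast', 0)
```

Because every message goes out stamped with one global counter, all machines see the broadcasts in the same total order.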
Figure: recovery after the sequencer crashes. (a) Machines 0 through 4 have seen sequence numbers 40, 43, 41, 44, and 40 when the sequencer (X) fails. (b) A new sequencer takes over. (c) It brings all the machines up to date, so every machine has seen message 44.
Figure: the Amoeba protocol hierarchy. Both RPC and group communication are layered on top of the FLIP layer.
File server operations:
Create - create a new file; optionally commit it as well.
Read - read all or part of a specified file.
Size - return the size of a specified file.
Modify - overwrite n bytes of an uncommitted file.
Insert - insert or append n bytes to an uncommitted file.
Delete - delete n bytes from an uncommitted file.
Directory server operations:
Create - create a new directory.
Delete - delete a directory or an entry in a directory.
Append - add a new directory entry to a specified directory.
Replace - replace a single directory entry.
Lookup - return the capability set corresponding to a specified name.
Getmasks - return the rights masks for the specified entry.
Chmod - change the rights bits in an existing directory entry.
Objects managed by the directory server can be
replicated automatically by using the replication
server. It practices what is called lazy
replication.