
Thursday, July 30, 2009

Thread

  • A thread (or lightweight process) is a basic unit of CPU utilization; it consists of:
o Thread ID
o program counter
o register set
o stack space
  • A thread shares with its peer threads its:
o code section
o data section
o operating-system resources
  • A traditional or heavyweight process has a single thread of control.
  • In a multiple threaded task, while one server thread is blocked and waiting, a
second thread in the same task can run.
o Cooperation of multiple threads in same job confers higher throughput
and improved performance.
o Applications that require sharing a common buffer (i.e., producer-
consumer) benefit from thread utilization.
  • Threads provide a mechanism that allows sequential processes to make
blocking system calls while also achieving parallelism.

Single Threaded Process




Multi-Threaded Process

Benefits of Multithreaded Programming
  • Responsiveness
  • Resource Sharing
  • Economy
  • Utilization of MP Architectures


User Thread
  • Thread management done by user-level threads library
  • Fast to create and manage threads
  • If the kernel is single-threaded, then any user-level thread that makes a
blocking system call will cause the entire process to block
  • Examples
- POSIX Pthreads
- Mach C-threads
- Solaris UI-threads

Kernel Thread
  • Supported by the Kernel
  • Slower to create and manage threads than are user threads
  • If a thread performs a blocking system call, then the kernel can schedule
another thread in the application for execution.
  • Multiple threads are able to run in parallel on multiprocessors.
  • Examples
- Windows NT/2000
- Solaris 2
- Tru64 UNIX
- BeOS
- Linux

Thread Library
  • A thread library provides the programmer with an API for creating and managing threads
  • Three main libraries:

-POSIX Pthreads

-Win32

-Java
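For example, a minimal Pthreads sketch (not from the notes; the worker function and its argument are invented for illustration) creates one thread and waits for it to finish:

#include <pthread.h>
#include <stdio.h>

/* Work performed by the new thread; argument and return value
 * are passed through untyped pointers. */
static void *worker(void *arg)
{
    int *n = arg;
    printf("worker thread sees n = %d\n", *n);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    int n = 42;

    /* Create the thread, then block until it terminates. */
    if (pthread_create(&tid, NULL, worker, &n) != 0)
        return 1;
    pthread_join(tid, NULL);
    return 0;
}

Compile with something like cc prog.c -lpthread.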


Multithreading Models
  • Many-to-One Model
  • One-to-One Model
  • Many-to-Many Model


Many-to-One Model

  • Many user-level threads mapped to a single kernel thread
  • Used on systems that do not support kernel threads.

Many-to-Many Model

  • Allows many user-level threads to be mapped to many kernel threads
  • Allows the operating system to create a sufficient number of kernel threads
  • Examples
    -Solaris prior to version 9
    -Windows NT/2000 with the ThreadFiber package




One-to-One Model
  • Each user-level thread maps to a kernel thread.
  • Creating a user thread requires creating the corresponding kernel thread.
  • Examples

    -Windows NT/XP/2000

    -OS/2

    -Linux

    -Solaris 9 and later





Thursday, July 16, 2009

Inter Process Communication

  • Mechanism for processes to communicate and to synchronize their actions
  • Message system – processes communicate with each other without resorting to shared variables
  • IPC facility provides two operations:
    -send(message) – message size fixed or variable
    -receive(message)
  • If P and Q wish to communicate, they need to:
    -establish a communication link between them
    -exchange messages via send/receive
  • Implementation of communication link
    -physical (e.g., shared memory, hardware bus)
    -logical (e.g., logical properties)

Direct Communication

  • Processes must name each other explicitly:
    -send (P, message) – send a message to process P
    -receive(Q, message) – receive a message from process Q
  • Properties of communication link
    -Links are established automatically.
    - A link is associated with exactly one pair of communicating
    processes.
    - Between each pair there exists exactly one link.
    - The link may be unidirectional, but is usually bi-directional.
  • Asymmetric variant
    - receive(id, message) – receive a message from any
    process; the sender's pid is stored in id

Indirect Communication

  • Messages are sent to and received from mailboxes (also referred to as ports).
    - Each mailbox has a unique id.
    -Processes can communicate only if they share a mailbox.
  • Properties of communication link
    - Link established only if processes share a common mailbox
    - A link may be associated with many processes.
    - Each pair of processes may share several communication links.

-Link may be unidirectional or bi-directional.

  • Operations
    -create a new mailbox
    - send and receive messages through mailbox
    -destroy a mailbox
  • Primitives are defined as (a small POSIX message-queue sketch appears at the end of this section):
    -send(A, message) – send a message to mailbox A
    -receive(A, message) – receive a message from mailbox A

  • Mailbox sharing

-P1, P2, and P3 share mailbox A.

-P1, sends; P2 and P3 receive.

- Who gets the message?

  • Solutions

-Allow a link to be associated with at most two processes.

-Allow only one process at a time to execute a receive operation.

-Allow the system to select arbitrarily the receiver. Sender is notified who the receiver was.
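As a rough sketch of the mailbox idea, a POSIX message queue can play the role of mailbox A; the queue name /mbox_A and the sizes below are arbitrary choices for the sketch, not part of the original notes:

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
    char buf[64];

    /* "Create a new mailbox": normally a separate sender and receiver
     * would each open the same name; one process does both here. */
    mqd_t mq = mq_open("/mbox_A", O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1)
        return 1;

    /* send(A, message) */
    mq_send(mq, "hello", strlen("hello") + 1, 0);

    /* receive(A, message); the buffer must be at least mq_msgsize bytes */
    if (mq_receive(mq, buf, sizeof(buf), NULL) > 0)
        printf("received: %s\n", buf);

    /* "Destroy the mailbox." */
    mq_close(mq);
    mq_unlink("/mbox_A");
    return 0;
}

On Linux this typically links with -lrt.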

Synchronization

  • Message passing may be either blocking or non-blocking.
  • Blocking is considered synchronous
  • Non-blocking is considered asynchronous
  • send and receive primitives may be either blocking or non-blocking.


Cooperating Process

  • Independent process cannot affect or be affected by the execution of another process
  • Cooperating process can affect or be affected by the execution of another process
  • Advantages of process cooperation
  • Information sharing
  • Computation speed-up
  • Modularity
  • Convenience

Operations On Process

Process Creation

  • Parent process creates children processes, which, in turn create other processes, forming a tree of processes.
  • Resource sharing
    -Parent and children share all resources.
    -Children share subset of parent’s resources.
    -Parent and child share no resources.
  • Execution
    -Parent and children execute concurrently.
    -Parent waits until children terminate.
  • Address space
    -Child is a duplicate of the parent.
    -Child has a program loaded into it.
  • UNIX examples
    -fork system call creates new process
    -fork returns 0 to the child, and the child's process id to the parent
    -exec system call used after a fork to replace the process’ memory space with a new program (see the sketch below).
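A minimal sketch of fork, exec, and wait (the program being exec'ed, /bin/ls, is just an illustrative choice):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();          /* create a new process */

    if (pid == 0) {
        /* fork returned 0: this is the child; replace its memory
         * image with a new program. */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        _exit(1);                /* only reached if execl fails */
    } else if (pid > 0) {
        /* fork returned the child's pid: this is the parent. */
        wait(NULL);              /* wait until the child terminates */
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}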

Process Termination

  • Process executes last statement and asks the operating system to delete it (exit)
    -Output data from child to parent (via wait)
    -Process’ resources are deallocated by operating system
  • Parent may terminate execution of children processes (abort)
    -Child has exceeded allocated resources
    -Task assigned to child is no longer required
  • If parent is exiting
    -Some operating systems do not allow a child to continue if its parent terminates
    -All children terminated - cascading termination

Process Scheduling

REPRESENTATION OF PROCESS SCHEDULING



Scheduling Queues

  • Job queue – set of all processes in the system.
  • Ready queue – set of all processes residing in main memory, ready and waiting to execute.
  • Device queues – set of processes waiting for an I/O device.
  • Processes migrate between the various queues.


Schedulers

  • Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue.
  • Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates CPU.
  • Short-term scheduler is invoked very frequently (milliseconds) ⇒ must be fast.
  • Long-term scheduler is invoked very infrequently (seconds, minutes) ⇒ may be slow.
  • The long-term scheduler controls the degree of multiprogramming.
  • Processes can be described as either:
    -I/O-bound process – spends more time doing I/O than computations; many short CPU bursts.
    -CPU-bound process – spends more time doing computations; few very long CPU bursts.


Context Switch

  • When CPU switches to another process, the system must save the state of the old process and load the saved state for the new process.
  • Context-switch time is overhead; the system does no useful work while switching.
  • Time dependent on hardware support.

The concept of process

  • An operating system executes a variety of programs: batch systems run jobs; time-shared systems run user programs or tasks.
  • Textbook uses the terms job and process almost interchangeably.
  • Process – a program in execution; process execution must progress in sequential fashion.
    A process includes:
    -program counter
    - stack
    -data section

Process State

  • As a process executes, it changes state
    1. new: The process is being created.
    2. running: Instructions are being executed.
    3. waiting: The process is waiting for some event to occur.
    4. ready: The process is waiting to be assigned to a processor.
    5. terminated: The process has finished execution
Process State Diagram

Process Control Block
  • Information associated with each process.
  • Process ID
  • Process state
  • Program counter
  • CPU registers
  • CPU scheduling information
  • Memory-management information
  • Accounting information
  • I/O status information
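A PCB is essentially a record of these fields; the struct below is only a schematic sketch with invented field names, not any real kernel's layout:

/* Schematic process control block; real kernels (e.g. Linux's
 * task_struct) are far larger and more detailed. */
struct pcb {
    int            pid;              /* process ID                    */
    int            state;            /* new, ready, running, ...      */
    unsigned long  program_counter;  /* saved PC                      */
    unsigned long  registers[16];    /* saved CPU registers           */
    int            priority;         /* CPU-scheduling information    */
    void          *page_table;       /* memory-management information */
    long           cpu_time_used;    /* accounting information        */
    int            open_files[16];   /* I/O status information        */
};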


Threads
  • One view of a thread is as an independent program counter operating within a process
  • A thread is sometimes called a lightweight process (LWP), but is a smaller unit of execution than a process
  • A thread consists of:

-a thread execution state (Running, Ready, etc.)

-a context (program counter, register set)

-an execution stack
-some per-thread static storage for local variables
-access to the memory and resources of its process (shared with all other threads in that process)
-OS resources (open files, signals, etc.)

Tuesday, July 7, 2009

System Boot

  • Operating system must be made available to hardware so hardware can start it
    -Small piece of code – bootstrap loader, locates the kernel, loads it into memory, and starts it
    -Sometimes two-step process where boot block at fixed location loads bootstrap loader
    -When power initialized on system, execution starts at a fixed memory location
    -Firmware used to hold initial boot code

System Generation (SYSGEN)

  • Operating systems are designed to run on any of a class of machines; the system must be configured for each specific computer site.
  • SYSGEN program obtains information concerning the specific configuration of the hardware system.
  • Booting – starting a computer by loading the kernel.
  • Bootstrap program – code stored in ROM that is able to locate the kernel, load it into memory, and start its execution.

Virtual Machine

-Virtual Machine



  • A virtual machine takes the layered approach to its logical conclusion. It treats hardware and the operating system kernel as though they were all hardware.

  • A virtual machine provides an interface identical to the underlying bare hardware.

  • The operating system creates the illusion of multiple processes, each executing on its own processor with its own (virtual) memory.

  • The resources of the physical computer are shared to create the virtual machines.
    -CPU scheduling can create the appearance that users have their own processor.
    -Spooling and a file system can provide virtual card readers and virtual line printers.
    -A normal user time-sharing terminal serves as the virtual machine operator’s console.



  • Implementation

-Traditionally written in assembly language, operating systems can now be written in higher-level languages.
-Code written in a high-level language:
-can be written faster.
-is more compact.
-is easier to understand and debug.
-An operating system is far easier to port (move to some other hardware) if it is written in a high-level language.







  • Benefits

A virtual machine, simply put, is a virtual computer running on a physical computer. The virtual machine emulates a physical machine in software. This includes not only the processor but the instruction set, the memory bus, any BIOS commands, and critical machine hardware such as the system clock and DMA hardware. Depending upon the machine, peripheral devices are generally virtualized as well, including storage devices like floppy drives, hard drives, and CD drives. Video, keyboard, and mouse support are also common. A virtual machine must look and act just like the real thing so that standard software, like operating systems and applications, can run without modification.




  • Examples


1. Java Virtual Machine


  • Compiled Java programs are platform-neutral bytecodes executed by a Java Virtual Machine (JVM).

  • JVM consists of
    - class loader
    - class verifier
    - runtime interpreter

  • Just-In-Time (JIT) compilers increase performance


Sunday, July 5, 2009

System Structure

Simple Structure
-Simple-structure systems do not have well-defined structures
-Early UNIX had only limited structure: the kernel and the system programs
-Everything between the system-call interface and the physical hardware is the kernel.


Layered Approach
-The operating system is broken up into a number of layers (or levels), each built on top of lower layers.
-Each layer is an implementation of an abstract object: the encapsulation of data and of the operations that manipulate those data.
-The bottom layer (layer 0) is the hardware.
-The main advantage of the layered approach is modularity.
-The lowest software layer is typically process management.
-Each layer uses only the operations provided by lower layers and does not have to know their implementation.
-Each layer hides the existence of certain data structures, operations, and hardware from higher-level layers.

Friday, July 3, 2009

System Calls

-System calls provide an interface to the services made available by an operating system.

Process Control
A process is basically a single running program. It may be a "system" program (e.g., login, update, csh) or a program initiated by the user (textedit, dbxtool, or a user-written one).

When UNIX runs a process it gives each process a unique number - a process ID, pid.

The UNIX command ps will list all current processes running on your machine and will list the pid.

The C function int getpid() will return the pid of the process that called this function.

A program usually runs as a single process. However later we will see how we can make programs run as several separate communicating processes.
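For example, a trivial sketch using getpid:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Print this process's ID; compare it with the ps listing. */
    printf("my pid is %d\n", (int)getpid());
    return 0;
}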


File Management

One of the most common features of the web applications I’ve built over the last 4 years doing Rails is file management. Users download file attachments in almost every web application I use. Thankfully Rails has a really capable suite of file management tools, and there are several great plugins to handle some of the more mundane functionality you’d need.

Over the next month or so I’m going to cover the set of techniques I use when building file management solutions for my clients, and some really exciting up and coming solutions which solve the last of my annoyances.

The rough order of business will be:

  1. File Downloads Done Right
  2. File Management Plugins
  3. Painless File Uploads
  4. Storing your Files

Information Maintenance
The CERN proxy server is a concurrent server. Thus it forks a child process for every request received, and thereafter it is the child process that is responsible for handling the request. The child process connects to the server, fetches the document, communicates this to the client, and dies.

In the modified proxy server, the child process also extracts any replication information present in the response from the server. Since all the child processes run in different address spaces, the replication information available to one child will not be visible to other child processes. In our design this replication information is maintained by the parent process itself in order to make sure that all the child processes have access to it. However, this approach requires some communication, to transfer the information available at a child process, to the parent process.

For implementing this communication, the parent process in the modified proxy server opens a pipe at the startup time. Thereafter, whenever a new child process is created to handle a request, it inherits this pipe from the parent process. The child process also receives the replication information available in the parent process at the time of creation. It uses this information to possibly redirect the request to a replica server. Further, if the child process obtains any new replication information in response to the request made, it communicates this information back to the parent process using the inherited pipe. The parent process collects this information and updates its database accordingly.
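The parent/child pipe mechanism described above can be sketched generically as follows; this is an illustrative example, not the modified proxy server's actual code, and the "replica" string is made up:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);                        /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {
        /* Child: inherits the pipe and reports back to the parent. */
        const char *info = "replica: http://mirror.example/";  /* made-up data */
        close(fd[0]);
        write(fd[1], info, strlen(info) + 1);
        close(fd[1]);
        _exit(0);
    }

    /* Parent: collects the information and updates its database. */
    char buf[128];
    close(fd[1]);
    ssize_t n = read(fd[0], buf, sizeof(buf));
    if (n > 0)
        printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}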

Operating Services

Operating System Services
Operating systems are responsible for providing essential services within a computer system:

  • Initial loading of programs and transfer of programs between secondary storage and main memory
  • Supervision of the input/output devices
  • File management
  • Protection facilities

Operating System Services

Exchange Server 2003 relies heavily on the operating system for network communication, security, directory services, and so forth. For example, Exchange Server 2003 requires TCP/IP and depends on the TCP/IP protocol stack and related components. These components are implemented in kernel drivers deeply integrated into the Windows I/O subsystem. Exchange Server 2003 uses standard Microsoft Win32 programming interfaces to interact with the kernel.

In addition to the Windows kernel, Exchange Server 2003 has the following Windows services dependencies:

  • Event Log This service enables event log messages issued by Exchange services and other Windows-based programs and components to be viewed in Event Viewer. This service cannot be stopped.
  • NTLM Security Support Provider This service provides security for programs that use remote procedure calls (RPCs) and transports other than named pipes to log on to the network using the NTLM authentication protocol.
  • Remote Procedure Call (RPC) This service enables the RPC endpoint mapper to support RPC connections to the server. This service also serves as the Component Object Model (COM).
    RPCs and lightweight remote procedure calls (LRPCs) are important inter-process communication mechanisms. LRPCs are local versions of RPCs. LRPCs are used between the Exchange store and those server components that depend on MAPI and related APIs for communication, such as messaging connectors to non-Exchange messaging systems. Regular RPCs, however, are used when clients must communicate with server services over the network. Typical RPC clients are MAPI clients, such as Microsoft Outlook and Exchange System Manager, but internal components of System Attendant, such as DSProxy, are also RPC clients. To accept directory requests from MAPI clients and pass them to an address book provider, DSProxy must use RPCs to communicate with Active Directory through the name service provider interface (NSPI). For more information about DSProxy, see Exchange Server 2003 and Active Directory.
    It is important to understand that RPCs are an application-layer communication mechanism, which means that RPCs use other network communication mechanisms, such as NetBIOS, named pipes, or Windows Sockets, to establish the communication path. To create a connection, the RPC endpoint mapper is required, because the client must first determine the endpoint that should be used. RPC server services use dynamic connection endpoints, by default. In a TCP/IP network, the client connects to the RPC endpoint mapper through TCP port 135, queries for the current TCP port of the desired service (for example, the Name Service Provider Interface (NSPI) port of Active Directory), obtains this TCP port from the endpoint mapper, and then uses this TCP port to establish the RPC connection to the actual RPC server. The following figure illustrates the role of the RPC endpoint mapper.
    Establishing an RPC connection to Active Directory
    Note:
    By default, Exchange services use dynamic TCP ports between 1024 and 5000 for RPC communication. For various services, such as System Attendant and Exchange Information Store service, it is possible to specify static ports using registry parameters. However, the client must contact the RPC endpoint mapper whether the port assignment is dynamic or static.
  • Server This service enables file and printer sharing and named pipe access to the server through the server message block (SMB) protocol. For example, if you access message tracking log files using the message tracking center in Exchange System Manager, you communicate with the server service because message tracking logs are shared for network access through \\\.Log, such as \\Server01\Server01.log. The SMB protocol is also required for remote Windows administration.
    The SCM key for the server service is lanmanserver. Underneath this registry key, you can find an important subkey called Shares. This key contains parameters for all shares on the server. One share that is particularly important for System Attendant is Address, which provides access to the proxy address generation DLLs that the Recipient Update Service uses to assign mailbox-enabled and mail-enabled recipient objects, X.400, SMTP, Lotus Notes, Microsoft Mail, Novell GroupWise, and Lotus cc:Mail addresses according to the settings in recipient policies. Address generation DLLs are accessed over the network, because gateway connectors (and their address generation DLLs) do not necessarily exist on the local server running Exchange Server. Recipient Update Service is part of System Attendant, so the server service must be started before System Attendant can start.
  • Workstation This service is the counterpart to the server service. It enables the computer to connect to other computers on the network based on the SMB protocol. This service must be started before System Attendant will start.
  • Security Accounts Manager The Security Accounts Manager (SAM) service stores security information for local user accounts and is required for local accounts to log on to the server. Because all Exchange services must log on to the local computer using the LocalSystem account, Exchange Server 2003 cannot function if this component is unavailable.
  • Windows Management Instrumentation This service provides a standard interface and object model for accessing management information about the computer hardware and software. System Attendant is the component in Exchange Server 2003 that is responsible for server monitoring and status. Exchange Server 2003 adds additional Windows Management Instrumentation (WMI) providers to the WMI service, so that you can access Exchange status information through WMI. The WMI service is required for the Microsoft Exchange Management service to start.

In addition, there are also several Windows services that Exchange Server 2003 requires in specific situations:

  • COM+ Event System This service supports system event notification for COM+ components, which provide automatic distribution of events to subscribing COM components. You should not disable this service on servers running Exchange Server 2003, because event sinks wrapped in a COM+ component application that run out-of-process on the server will not function properly.
  • COM+ System Application This service manages the configuration and tracking of COM+-based components. If the service is stopped, most COM+-based components in Exchange Server 2003 will not function properly.
  • Error Reporting Service This is an optional service that enables automatic reporting of errors. Servers running Exchange Server can use this service to automatically send fatal service error information to Microsoft.
  • HTTP SSL This service implements the secure HTTP (HTTPS) for the HTTP service, using Secure Socket Layer (SSL). If you want to use HTTPS to secure Outlook Web Access or RPC over HTTP connections, you must enable this service.
  • IPSec Services This service enables Internet Protocol security (IPSec), which provides end-to-end security between clients and servers on TCP/IP networks. This service must be enabled if you want to use IPSec to secure network traffic between a server running Exchange Server and other computers on the network, such as a front-end server running Exchange Server or domain controller.
  • Microsoft Search This service enables the indexing of information stored on the server. This service is required if you want to enable full-text indexing on a mailbox or public folder store on the server running Exchange Server.
  • Microsoft Software Shadow Copy Provider This service manages software-based volume shadow copies taken by the Microsoft Volume Shadow Copy service. If you are using the Windows Backup tool to backup Exchange Server 2003 messaging databases, you can stop this service, because the Windows Backup tool does not rely on the Volume Shadow Copy service. If you are using a non-Microsoft backup solution, on the other hand, which does use the Volume Shadow Copy service, you must enable this service. In general, this service should have the same startup type as the Volume Shadow Copy service.
  • Net Logon This service enables a secure channel between the server running Exchange Server and a domain controller. This service is required for users to access mailboxes on the server running Exchange Server and for any service that is using a domain account to start.
  • Performance Logs and Alerts This service collects performance data from local or remote computers based on preconfigured schedule parameters, and then writes the data to a log or triggers an alert. If you stop this service, you cannot collect performance information using the Performance Logs and Alerts snap-in, which is available in the Performance tool.
  • Remote Registry This service enables users to modify registry settings remotely. Exchange System Manager requires access to the registry, for example, if you want to configure diagnostics logging for Exchange services. This service must be available if you run Exchange System Manager on a management workstation. If this service is stopped, the registry can only be modified on the local server.
  • System Event Notification This service monitors system events and notifies subscribers to COM+ Event System of these events. If this service is stopped, COM+ Event System subscribers do not receive Exchange system event notifications. The following table lists the system events provided by Exchange Server 2003.

    System events in Exchange Server 2003



Operating System Structures

*System Components

-Operating System Components
A process in operating system terminology is 'a program in execution'. Unlike a program which is resident on secondary storage, the process is an active entity. The process utilizes the resources of the system. A process is created whenever a program is executed.

Modern computer systems allow multiple processes to be loaded into memory at the same time and, through time-sharing (or multitasking), give an appearance that they are being executed at the same time even if there is just one processor.

The Process

Program in execution

Process state (see text diagram)

Process control block

Machine dependent: context

Machine independent: state, scheduling info, memory mgmt, accounting, open files, I/O

Scheduling

Short-term queues: ready, I/O, sleep

Degree of multi-programming: number of processes

Creation rate vs. exit rate

CPU vs. I/O bound processes

Swapping to maintain a balance

Operations

Creation

Parent/child relationship

Execution: waits for children or continues

Address space: duplicate of parent or new program

Termination

Voluntary: exit()

Involuntary: abort(), kill()

Cooperating vs. independent

Independent: no sharing

Dependent: Sharing, producer/consumer

Interprocess Communication

Message passing vs. shared memory

Blocking vs. non-blocking, send & receive

Direct: point-to-point, simplex or duplex, explicit naming of target

Indirect: send through mailbox, many-to-many, simplex or duplex, name associated with mailbox

Buffering: zero capacity, bounded capacity, unbounded capacity

Exceptional conditions

Process terminates

Lost messages: OS detects and resends, sending process detects and resends, OS detects & process is given option

Detection: timeouts

Scrambled messages: CRC & checksums

CPU Scheduling

Objective: have a process running at all times

CPU-I/O burst cycle (CPU burst times graph)

When the CPU goes idle, the next process to run is chosen by the short-term scheduler

Scheduling points

1. State change from running to waiting

2. State change from running to ready

3. State change from waiting to ready

4. Process termination

Preemption: process losing the CPU against its will, cases 2 & 3

Dispatcher: performs context switch, switch back to user stack and mode, restart program

Criteria for measuring the performance of a scheduling algorithm

CPU utilization

Throughput: number of processes completed per unit time

Turnaround time: how long it takes a process to complete

Waiting time: in ready queue only

Response time: variance?

Scheduling algorithms

FCFS: average waiting times

Shortest job first: minimum average waiting time

Predicting burst times

Exponential average: $\tau_{n+1} = \alpha t_n + (1 - \alpha) \tau_n$

$t_n$: current burst

$\tau_n$: moving average of past bursts

$\alpha$: relative weights of past and present
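In code, the prediction update is a one-liner (alpha and the initial estimate tau_0 are tuning choices):

/* Exponentially weighted prediction of the next CPU burst:
 * tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n            */
double next_burst_estimate(double alpha, double t_n, double tau_n)
{
    return alpha * t_n + (1.0 - alpha) * tau_n;
}

/* e.g. with alpha = 0.5, tau_0 = 10 and observed bursts 6, 4, 6, 4,
 * the successive estimates are 10, 8, 6, 6, 5. */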

SJF is special case of priority scheduling

Priority scheduling implies potential starvation

Round-robin: Time-sharing w/ preemption

Smaller quantum implies more context switching

Real-time scheduling

Response vs. dispatch latency

Priority inversion & inheritance

Algorithm evaluation

Deterministic modeling & analytic evaluation

Queueing models

Simulations

Implementation & empirical observation

Process Synchronization

Background (see text)

Critical-section problem: solutions must satisfy:

Mutual exclusion

Progress

Bounded waiting

Two-process solutions: Algorithm 1-3 (slides)

$N$-process solutions: Bakery Algorithm

Synchronization hardware: test-and-set (slides)
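A test-and-set lock can be sketched with C11 atomics; this is a generic spinlock, not the slides' code:

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

/* Spin until the flag was previously clear: test-and-set in a loop. */
void acquire(void)
{
    while (atomic_flag_test_and_set(&lock))
        ;  /* busy-wait */
}

void release(void)
{
    atomic_flag_clear(&lock);
}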

Semaphores

No busy-waiting

Semaphores have process queues associated with them in which waiting processes block (slides)
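A corresponding POSIX-semaphore sketch (an illustrative producer/consumer skeleton, not the text's bounded-buffer solution); sem_wait blocks in the semaphore's queue instead of spinning:

#include <semaphore.h>

static sem_t items;   /* counts items available to consume */

void init(void)
{
    sem_init(&items, 0, 0);      /* start with zero items */
}

void producer_step(void)
{
    /* ... produce an item ... */
    sem_post(&items);            /* signal: wakes one blocked consumer */
}

void consumer_step(void)
{
    sem_wait(&items);            /* blocks in the semaphore's queue if zero */
    /* ... consume an item ... */
}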

Deadlocks & starvation

Synchronization problems

Do not use the bounded buffer solution in text

Dining Philosophers

Monitors

Programmer-defined operations: internal data is only accessed through the operations

Access to operations is synchronized

Issues: programming language construct, easier to implement in a shared memory environment

Deadlock

Necessary conditions:

1. Mutual exclusion: ability of one process to hold a resource while other processes wait for the same resource

2. Hold and wait: ability of a process while holding one resource to block waiting for another

3. No resource preemption: inability of the system to revoke access to a resource

4. Circular wait: condition where a cycle of processes exists, each waiting for resources that are held by other processes

Resource Allocation Graphs

Two node types: Resource nodes & Process nodes

Edge from process to resource: request

Edge from resource to process: allocation

Figure 7.1 from text

Prevention: Disallow one of the necessary conditions

1. Mutual exclusion: not possible

2. Hold and wait: grant all at once, request only when not holding (low utilization, starvation)

3. No resource preemption: allow resources allocated to a process that is waiting to be taken away

4. Circular wait: processes all request resources in the same order

Problems w/ prevention: low utilization, reduced throughput

Avoidance

Safe state: a sequence of processes $\langle P_1, \ldots, P_n \rangle$ is safe if each $P_j$ can obtain its remaining resources immediately or from the resources held by any $P_i : i < j$

Example: text, page 218

Resource graph allocation algorithm:

Single resources only

Additional edge: claim edge, intent to request resource

Requests are granted only if converting the request edge to an assignment edge does not form a cycle

Banker's algorithm

Multiple resource instances

Processes declare the maximum number of instances of each resource

Data structures:

Available: vector of length $m$ indicating the number of available resources of each type

Max: $n \times m$ matrix defining the maximum demand for each resource type by each process

Allocation: $n \times m$ matrix tracking the number of resources of each type currently allocated to each process

Need: $n \times m$ matrix tracking the remaining resources of each type needed by each process
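Using these data structures, the safety check at the heart of the Banker's algorithm can be sketched as below (N, M, and the array layout are assumptions made for the sketch, not the text's exact pseudocode):

#include <stdbool.h>
#include <string.h>

#define N 5   /* number of processes      */
#define M 3   /* number of resource types */

/* Return true if the system is in a safe state, i.e. there exists an
 * ordering in which every process can obtain its needed resources and finish. */
bool is_safe(const int available[M],
             const int allocation[N][M],
             const int need[N][M])
{
    int work[M];
    bool finished[N] = { false };
    memcpy(work, available, sizeof(work));

    for (int done = 0; done < N; ) {
        bool progressed = false;
        for (int i = 0; i < N; i++) {
            if (finished[i])
                continue;
            bool can_run = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {
                /* Pretend P_i runs to completion and releases its resources. */
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];
                finished[i] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed)
            return false;   /* no runnable process left: unsafe */
    }
    return true;
}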




-Main Memory Management

Memory management is the act of managing computer memory. In its simpler forms, this involves providing ways to allocate portions of memory to programs at their request, and freeing it for reuse when no longer needed. The management of main memory is critical to the computer system.

Virtual memory systems separate the memory addresses used by a process from actual physical addresses, allowing separation of processes and increasing the effectively available amount of RAM using disk swapping. The quality of the virtual memory manager can have a big impact on overall system performance.

Garbage collection is the automated allocation, and deallocation of computer memory resources for a program. This is generally implemented at the programming language level and is in opposition to manual memory management, the explicit allocation and deallocation of computer memory resources.
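As a trivial contrast between manual memory management and garbage collection, the explicit allocate/free cycle in C looks like this:

#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Request a block of memory ... */
    char *buf = malloc(64);
    if (buf == NULL)
        return 1;
    strcpy(buf, "manually managed");

    /* ... and explicitly return it when no longer needed; a garbage
     * collector would perform this step automatically. */
    free(buf);
    return 0;
}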

-Memory management is a tricky compromise between performance (access time) and quantity (available space). We always seek the maximum available memory space but we are rarely prepared to compromise on performance.
Memory management must also perform the following functions:

  • allow memory sharing (for a multi-threaded system);
  • allocate blocks of memory space for different tasks;
  • protect the memory spaces used (e.g. prevent a user from changing a task performed by another user);
  • optimise the quantity of available memory, specifically via memory expansion systems.


-File Management

Also referred to as simply a file system or filesystem. The system that an operating system or program uses to organize and keep track of files. For example, a hierarchical file system is one that uses directories to organize files into a tree structure.

Although the operating system provides its own file management system, you can buy separate file management systems. These systems interact smoothly with the operating system but provide more features, such as improved backup procedures and stricter file protection.

-I/O System Management

A programmable computer user input/output (I/O) system having a multiple degree-of-freedom magnetic levitation (maglev) device with a matched electrodynamically levitated flotor and stator combination and an electrodynamic forcer means for receiving coil currents for applying controlled magnetic...

-Protection System

Protection System is rather annoyware, i.e. commercial software that applies extremely annoying ads to convince users of the need to buy it (buying Protection System means registering this program for at least a one-year period). We have to stress that registering Protection System is no escape from its alerts, as the hackers’ design is to get endless cashflow from the user who once agreed to pay. That is, registered Protection System will ask for updates and extended registration. There is thus no way to get rid of Protection System’s noisy ads but to remove Protection System entirely.

-Command Interpreter System

The command interpreter (CI) is the program that acts as the interface between you and the operating system. It checks the MPE/iX commands that you enter for spelling and syntax errors. The CI then passes the command along to the appropriate system procedure for execution. Following execution, control returns to the CI, which becomes ready for another command.