Thread



  • A single-threaded process has one program counter specifying the location of the next instruction to execute; the process executes instructions sequentially, one at a time, until completion
  • A multi-threaded process has one program counter per thread



Benefits of a Multithreaded Process

*Responsiveness – the process can keep running even if part of it is blocked or performing a lengthy operation
*Resource sharing – threads share the memory and resources of the process they belong to
*Economy – creating and context-switching threads is cheaper than creating and switching processes
*Utilization of multiprocessor architectures – threads can run in parallel on different processors


User Threads

*Thread management done by user-level threads library

*Three primary thread libraries:

*POSIX Pthreads
*Win32 threads
*Java threads


Kernel Threads

Supported by the Kernel

Examples

Windows XP/2000

Solaris

Linux

Tru64 UNIX

Mac OS


Thread library


The threads library allows concurrent programming in Objective Caml. It provides multiple threads of control (also called lightweight processes) that execute concurrently in the same memory space. Threads communicate by in-place modification of shared data structures, or by sending and receiving data on communication channels



The library is implemented by time-sharing on a single processor; it will not take advantage of multi-processor machines, so using it will never make programs run faster. However, many programs are easier to write when structured as several communicating processes.
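The same structure can be sketched in Java, one of the thread libraries listed above: two threads of control run concurrently in one address space and communicate by in-place modification of a shared data structure. The class and field names below are illustrative.

// Two threads sharing one memory space: a worker increments a shared counter
// while the main thread waits for it to finish.
public class SharedCounterDemo {
    private static int counter = 0;                   // shared data structure
    private static final Object lock = new Object();  // guards the counter

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            for (int i = 0; i < 1000; i++) {
                synchronized (lock) {                 // in-place modification of shared state
                    counter++;
                }
            }
        });
        worker.start();   // worker now runs concurrently with the main thread
        worker.join();    // main thread blocks until the worker terminates
        System.out.println("Final counter value: " + counter);
    }
}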


Multithreading Models



Support for threads can be provided either at the user level (user threads) or by the kernel (kernel threads); kernel threads are supported and managed directly by the operating system. Virtually all contemporary operating systems support kernel threads. Ultimately, there must be a relationship between user threads and kernel threads, and there are three common ways of establishing that relationship.




*Many-to-One








Many user threads are mapped onto one kernel thread (Fig 4.2 in textbook on page 126, slide 3.45). Thread management is done by the thread library in user space, so this is very efficient; but the entire process will be blocked if one thread makes a blocking system call, since only one thread can access the kernel at a time. Also, multiple threads are unable to run in parallel on multiprocessors.







Many user-level threads mapped to single kernel thread

Examples:

Solaris Green Threads

GNU Portable Threads








*One-to-One




Each user thread is mapped to one kernel thread. This provides maximum concurrency by allowing another thread to run when the currently running thread blocks; it also allows multiple threads within a process to run in parallel on multiprocessors. The drawback is that creating a user thread requires creating the corresponding kernel thread. There is usually more overhead in creating a kernel thread than a user thread, and most implementations of this model restrict the number of threads that can be supported by the system.

Each user-level thread maps to a kernel thread

Examples

Windows NT/XP/2000

Linux

Solaris 9 and later








*Many-to-Many Model




This model typically allows many user-level threads to be mapped to a smaller or equal number of kernel threads; it is a hybrid of the first two models. It provides better concurrency than the many-to-one model (though less than the one-to-one model), yet is flexible in that it can create many user threads without being restricted by the number of kernel threads.

Allows many user-level threads to be mapped to many kernel threads

Allows the operating system to create a sufficient number of kernel threads

Examples:

Solaris prior to version 9

Windows NT/2000 with the ThreadFiber package
























Interprocess Communication


Direct communication: The sender and receiver can communicate in either of the following forms:
• synchronous — the involved processes synchronize at every message. Both send and receive are blocking operations. This form is also known as a rendezvous.
• asynchronous — the send operation is almost always non-blocking. The receive operation, however, can have blocking (waiting) or non-blocking (polling) variants.
Processes must explicitly name the receiver or sender of a message (symmetric addressing):
– send (P, message). Send message to process P.
– receive (Q, message). Receive message from Q.
In a client-server system, the server does not have to know the name of a specific client in order to receive a message. In this case, a variant of the receive operation can be used (asymmetric addressing):
– listen (ID, message). Receive a pending (posted) message from any process; when a message arrives, ID is set to the name of the sender.
In this form of communication the interconnection between the sender and receiver has the following characteristics:
• A link is established automatically, but the processes need to know each other's identity.
• A unique link is associated with the two processes.
• Each pair of processes has only one link between them.
• The link is usually bi-directional, but it can be uni-directional.
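The send/receive pair above can be sketched in Java by giving each sender-to-receiver pair its own rendezvous link; the link registry, process names, and method names below are illustrative, not a standard API.

import java.util.Map;
import java.util.concurrent.*;

// Sketch of direct, symmetric addressing: one rendezvous link per
// sender->receiver pair, so send(P, message) and receive(Q, message)
// block until both parties are ready (the synchronous form above).
public class DirectChannelDemo {
    private static final Map<String, BlockingQueue<String>> links = new ConcurrentHashMap<>();

    private static BlockingQueue<String> link(String from, String to) {
        // zero-capacity queue: sender and receiver must meet (rendezvous)
        return links.computeIfAbsent(from + "->" + to, k -> new SynchronousQueue<>());
    }

    // send(P, message): blocking send from "self" to process P
    static void send(String self, String p, String message) throws InterruptedException {
        link(self, p).put(message);
    }

    // receive(Q, message): blocking receive of a message sent by process Q
    static String receive(String self, String q) throws InterruptedException {
        return link(q, self).take();
    }

    public static void main(String[] args) throws InterruptedException {
        Thread processQ = new Thread(() -> {
            try { send("Q", "P", "hello from Q"); } catch (InterruptedException ignored) {}
        });
        processQ.start();
        System.out.println(receive("P", "Q"));   // process P receives from Q
        processQ.join();
    }
}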

Indirect communication: In case of indirect communication, messages are sent to mailboxes, which are special repositories. A message can then be retrieved from this repository.
– send (A, message). Send a message to mailbox A.
– receive (A, message). Receive a message from mailbox A.
This form of communication decouples the sender and receiver, thus allowing greater flexibility. Generally, a mailbox is associated with many senders and receivers. In some systems, only one receiver is (statically) associated with a particular mailbox; such a mailbox is often called a port.
A process that creates a mailbox is the owner (sender). Mailboxes are usually managed by the system.
The interconnection between the sender and receiver has the following characteristics:
• A link is established between two processes only if they "share" a mailbox.
• A link may be associated with more than two processes.
• Communicating processes may have different links between them, each corresponding to one mailbox.
• The link is usually bi-directional, but it can be uni-directional.
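A mailbox can be modeled as a named queue that any number of senders and receivers share; the mailbox names and queue type below are illustrative.

import java.util.Map;
import java.util.concurrent.*;

// Sketch of indirect communication: messages go to a named mailbox,
// not to a specific process, so sender and receiver are decoupled.
public class MailboxDemo {
    private static final Map<String, BlockingQueue<String>> mailboxes = new ConcurrentHashMap<>();

    private static BlockingQueue<String> mailbox(String name) {
        return mailboxes.computeIfAbsent(name, k -> new LinkedBlockingQueue<>());
    }

    // send (A, message): deposit a message in mailbox A
    static void send(String a, String message) throws InterruptedException {
        mailbox(a).put(message);
    }

    // receive (A, message): retrieve a message from mailbox A
    static String receive(String a) throws InterruptedException {
        return mailbox(a).take();
    }

    public static void main(String[] args) throws InterruptedException {
        // Two senders share mailbox "A"; neither needs to know who will consume.
        new Thread(() -> { try { send("A", "from sender 1"); } catch (InterruptedException ignored) {} }).start();
        new Thread(() -> { try { send("A", "from sender 2"); } catch (InterruptedException ignored) {} }).start();
        System.out.println(receive("A"));
        System.out.println(receive("A"));
    }
}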



Synchronization
•  Message passing may be either blocking or non-blocking
•  Blocking is considered synchronous
–  Blocking send has the sender block until the message is received
–  Blocking receive has the receiver block until a message is available
•  Non-blocking is considered asynchronous
–  Non-blocking send has the sender send the message and continue
–  Non-blocking receive has the receiver receive a valid message or null


Buffering
•  Queue of messages attached to the link; implemented in one of three ways
1. Zero capacity – 0 messages; sender must wait for receiver (rendezvous)
2. Bounded capacity – finite length of n messages; sender must wait if link is full
3. Unbounded capacity – infinite length; sender never waits
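These three capacities map directly onto standard Java queue types, and the same calls also show the blocking/non-blocking distinction from the Synchronization list above (put/take block, offer/poll do not). A small sketch:

import java.util.concurrent.*;

// Sketch of the three buffering schemes for a message link.
public class BufferingDemo {
    public static void main(String[] args) throws InterruptedException {
        // 1. Zero capacity: sender and receiver must rendezvous.
        BlockingQueue<String> zero = new SynchronousQueue<>();

        // 2. Bounded capacity: holds at most n messages (n = 3 here).
        BlockingQueue<String> bounded = new ArrayBlockingQueue<>(3);

        // 3. "Unbounded" capacity: the sender (practically) never waits.
        BlockingQueue<String> unbounded = new LinkedBlockingQueue<>();

        bounded.put("m1");                      // blocking send (waits only if the link is full)
        System.out.println(bounded.take());     // blocking receive
        System.out.println(unbounded.poll());   // non-blocking receive: null, nothing queued yet
        System.out.println(zero.offer("m2"));   // non-blocking send: false, no receiver waiting
    }
}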


Producer-Consumer Example

The Producer and Consumer examples share data through a common CubbyHole object. Although, ideally, Consumer will get each value produced once and only once, neither Producer nor Consumer makes any effort whatsoever to ensure that this happens. The synchronization between these two threads occurs at a lower level within the get and put methods of the CubbyHole object. Assume for a moment, however, that the two threads make no arrangements for synchronization; let's discuss the potential problems that might arise from this.
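A sketch of the single-slot CubbyHole the passage describes, with the synchronization pushed into get and put via wait/notifyAll; the field names and the single-slot design are assumptions based on the description above.

// Single-slot shared buffer: all synchronization lives inside get() and put(),
// so the Producer and Consumer threads never coordinate directly.
class CubbyHole {
    private int contents;
    private boolean available = false;   // true when a value is waiting to be consumed

    public synchronized int get() {
        while (!available) {
            try { wait(); } catch (InterruptedException ignored) {}   // sleep until a value arrives
        }
        available = false;
        notifyAll();                     // wake a producer waiting for the slot to empty
        return contents;
    }

    public synchronized void put(int value) {
        while (available) {
            try { wait(); } catch (InterruptedException ignored) {}   // sleep until the slot is free
        }
        contents = value;
        available = true;
        notifyAll();                     // wake a consumer waiting for a value
    }
}

A Producer thread would loop calling put(i) while a Consumer thread loops calling get(); without the wait/notifyAll handshake inside CubbyHole, the problems discussed below (lost or doubly-consumed values) can occur.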

The producer-consumer problem illustrates the need for synchronization in systems where many processes share a resource. In the problem, two processes share a fixed-size buffer. One process produces information and puts it in the buffer, while the other process consumes information from the buffer. These processes do not take turns accessing the buffer, they both work concurrently. Herein lies the problem.


(1) The consumer checks to see if the buffer is empty; if so, the consumer will put itself to sleep until the producer wakes it up. A "wakeup" occurs when the producer puts an item into a buffer that it found empty. (2) Then, the consumer will remove a widget from the buffer. The consumer will never try to remove a widget from an empty buffer, because it will not wake up until the producer has put something in it. (3) If the buffer was full before it removed the widget, the consumer will wake the producer. (4) Finally, the consumer will consume the widget. As was the case with the producer, an interrupt could occur between any of these steps, allowing the producer to run.

Concept of Process

1. Concept of Process

Process Concept
An operating system executes a variety of programs:
• Batch system – jobs
• Time-shared systems – user programs or tasks
A process is a program in execution; process execution must progress in sequential fashion.
Note: a process (active entity) is different from a program (passive entity). Several processes may be instances of the same program.
A process includes, e.g., a program counter, a stack, and a data section.

a. Process State
During the lifespan of a process, its execution status may be in one of several states (associated with each state is usually a queue on which the process resides):
• Executing: the process is currently running and has control of a CPU
• Waiting: the process is currently able to run, but must wait until a CPU becomes available
• Blocked: the process is currently waiting on I/O, either for input to arrive or output to be sent
• Suspended: the process is currently able to run, but for some reason the OS has not placed the process on the ready queue
• Ready: the process is in memory and will execute given CPU time

b. Process Control Block
If the OS supports multiprogramming, then it needs to keep track of all the processes. For each process, a process control block (PCB) is used to track the process's execution status, including the following:
• Its current processor register contents
• Its processor state (whether it is blocked or ready)
• Its memory state
• A pointer to its stack
• Which resources have been allocated to it
• Which resources it needs

c. Threads
Although a thread must execute within a process, a process and its associated threads are different concepts. Processes are used to group resources together; threads are the entities scheduled for execution on the CPU. A thread is a single sequential stream of execution within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. Within a process, threads allow multiple streams of execution. In many respects, threads are a popular way to improve applications through parallelism. The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel. Like a traditional process (i.e., a process with one thread), a thread can be in any of several states (Running, Blocked, Ready, or Terminated). Each thread has its own stack, since a thread will generally call different procedures and thus have a different execution history. In an operating system that has a thread facility, the basic unit of CPU utilization is a thread. A thread consists of a program counter (PC), a register set, and a stack space. Threads are not independent of one another the way processes are; a thread shares its code section, data section, and OS resources (collectively known as a task), such as open files and signals, with the other threads.

Processes vs. Threads
As mentioned earlier, in many respects threads operate in the same way as processes. Some of the similarities and differences are:
Similarities
• Like processes, threads share the CPU, and only one thread is active (running) at a time.
• Like processes, threads within a process execute sequentially.
• Like processes, a thread can create children.
• Like processes, if one thread is blocked, another thread can run.
Differences
• Unlike processes, threads are not independent of one another.
• Unlike processes, all threads can access every address in the task.
• Unlike processes, threads are designed to assist one another. (Processes might or might not assist one another, because processes may originate from different users.)
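A minimal sketch of the bookkeeping such a PCB might carry, using the fields listed above; the class layout and field types are illustrative, not any particular OS's structure.

import java.util.ArrayList;
import java.util.List;

// Illustrative process control block: one record per process, holding the
// execution status the scheduler and dispatcher need to know.
class ProcessControlBlock {
    enum State { READY, EXECUTING, WAITING, BLOCKED, SUSPENDED }

    int pid;                              // process identifier
    State state = State.READY;            // processor state (ready, blocked, ...)
    long programCounter;                  // saved PC while the process is not executing
    long[] registers = new long[16];      // saved register contents
    long stackPointer;                    // pointer to the process's stack
    long memoryBase, memoryLimit;         // memory state (simple base/limit model)
    List<String> allocatedResources = new ArrayList<>();  // resources already held
    List<String> requestedResources = new ArrayList<>();  // resources it still needs
}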


In a multiprogramming OS, multiple jobs are held in memory and alternate between using the CPU, using I/O, and waiting (idle). The key to high efficiency with multiprogramming is effective scheduling:
– High-level
– Short-term
– I/O
High-level scheduling
– Determines which jobs are admitted into the system for processing
– Controls the degree of multiprogramming
– Admitted jobs are added to the queue of pending jobs that is managed by the short-term scheduler
– Works in batch or interactive modes
Short-term scheduling
– This OS segment runs frequently and determines which pending job will receive the CPU's attention next
– Based on the normal changes of state that a job/process goes through
– A process runs on the CPU until:
  + it issues a service call to the OS (e.g., for I/O service), and the process is suspended until the request is satisfied;
  + the process causes an interrupt and is suspended; or
  + an external event causes an interrupt.
– The short-term scheduler is then invoked to determine which process is serviced next.


∗ The CPU executes a process

∗ The kernel suspends the process when its time quantum elapses

∗ The kernel schedules another process to execute

∗ The kernel later reschedules the suspended process

∗ The kernel allocates main memory for an executing process


2. Process Scheduling
a. Scheduling Queues

Job queue – set of all processes in the system
Ready queue – set of all processes residing in main memory, ready and waiting to execute
Device queues – set of processes waiting for an I/O device
Processes migrate among the various queues


b. Schedulers
An O(1) scheduler is a kernel scheduling design that can schedule processes within a constant amount of time, regardless of how many processes are running on the operating system (OS). One of the major goals of operating system designers is to minimize overhead and jitter of OS services, so that application programmers who use them endure less of a performance impact. An O(1) scheduler provides "constant time" scheduling services, thus reducing the amount of jitter normally incurred by the invocation of the scheduler. In the realm of real-time operating systems, deterministic execution is key, and an O(1) scheduler is able to provide scheduling services with a fixed upper bound on execution times.
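The constant-time idea can be sketched as one ready queue per priority level plus a bitmap of non-empty levels, so picking the next process is a single find-first-set rather than a scan over all processes. This is a toy illustration of the principle, not the actual Linux O(1) scheduler; the names are invented.

import java.util.ArrayDeque;
import java.util.Deque;

// Toy O(1) run queue: an array of per-priority queues plus a bitmap.
// Enqueue, dequeue, and "find the highest ready priority" all take constant time.
class ConstantTimeRunQueue {
    private static final int LEVELS = 64;      // priority levels 0 (highest) .. 63
    private final Deque<Integer>[] queues;     // one FIFO queue of process ids per level
    private long bitmap = 0L;                  // bit i set => queue i is non-empty

    @SuppressWarnings("unchecked")
    ConstantTimeRunQueue() {
        queues = (Deque<Integer>[]) new Deque[LEVELS];
        for (int i = 0; i < LEVELS; i++) queues[i] = new ArrayDeque<>();
    }

    void enqueue(int pid, int priority) {
        queues[priority].addLast(pid);
        bitmap |= (1L << priority);
    }

    // Pick the next process to run; assumes at least one process is ready.
    int dequeueNext() {
        int p = Long.numberOfTrailingZeros(bitmap);   // highest-priority non-empty level
        int pid = queues[p].removeFirst();
        if (queues[p].isEmpty()) bitmap &= ~(1L << p);
        return pid;
    }
}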

c. Context switch
A context switch is the computing process of storing and restoring the state (context) of a CPU such that multiple processes can share a single CPU resource. The context switch is an essential feature of a multitasking operating system. Context switches are usually computationally intensive, and much of the design of operating systems aims to optimize the use of context switches. A context switch can mean a register context switch, a task context switch, a thread context switch, or a process context switch. What constitutes the context is determined by the processor and the operating system.
In a context switch, the state of the first process must be saved somehow, so that, when the scheduler gets back to the execution of the first process, it can restore this state and continue. The state of the process includes all the registers that the process may be using, especially the program counter, plus any other operating system specific data that may be necessary. Often, all the data that is necessary for state is stored in one data structure, called a switchframe or a process control block. Now, in order to switch processes, the switchframe for the first process must be created and saved. The switchframes are sometimes stored upon a per-process stack in kernel memory (as opposed to the user-mode stack), or there may be some specific operating system defined data structure for this information. Since the operating system has effectively suspended the execution of the first process, it can now load the switchframe and context of the second process. In doing so, the program counter from the switchframe is loaded, and thus execution can continue in the new process. New processes are chosen from a queue or queues. Process and thread priority can influence which process continues execution, with processes of the highest priority checked first for ready threads to execute.





3. Process Operation
a. Process Creation
In general-purpose systems, some way is needed to create processes as needed during operation. There are four principal events that lead to process creation:
• System initialization
• Execution of a process-creation system call by a running process
• A user request to create a new process
• Initialization of a batch job
Foreground processes interact with users. Background processes stay in the background sleeping but suddenly spring to life to handle activity such as email, web pages, printing, and so on; background processes are called daemons. A process may create a new process by means of a create-process system call such as fork, which creates an exact clone of the calling process. When it does so, the creating process is called the parent process and the created one is called the child process. Only one parent is needed to create a child process; note that, unlike plants and animals that use sexual reproduction, a process has only one parent. This creation of processes yields a hierarchical structure of processes like the one in the figure. Notice that each child has only one parent, but each parent may have many children. After the fork, the two processes, the parent and the child, have the same memory image, the same environment strings, and the same open files. After a process is created, both the parent and child have their own distinct address space: if either process changes a word in its address space, the change is not visible to the other process.
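Java has no fork(), but the parent/child relationship can still be illustrated with ProcessBuilder, which asks the OS to create a new child process; the command used here is illustrative, and unlike fork the child runs a fresh program in its own address space rather than a copy of the parent's memory image.

import java.io.IOException;

// Parent creates a child process and waits for it to terminate.
public class CreateChildDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder("java", "-version");  // illustrative command
        pb.inheritIO();                        // child shares the parent's standard streams
        Process child = pb.start();            // ask the OS to create the child process
        int status = child.waitFor();          // parent blocks until the child terminates
        System.out.println("child exited with status " + status);
    }
}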

b. Process Termination
A process terminates when it finishes executing its last statement. Its resources are returned to the system, it is purged from any system lists or tables, and its process control block (PCB) is erased, i.e., the PCB's memory space is returned to a free memory pool. A process usually terminates for one of the following reasons:
• Normal exit: most processes terminate because they have done their job. This call is exit in UNIX.
• Error exit: the process discovers a fatal error; for example, a user tries to compile a program that does not exist.
• Fatal error: an error caused by the process due to a bug in the program, for example executing an illegal instruction, referencing non-existent memory, or dividing by zero.
• Killed by another process: a process executes a system call telling the operating system to terminate some other process. In UNIX, this call is kill. In some systems, when a process is killed, all the processes it created are also killed.


4. Cooperating Processes

A cooperating process can affect or be affected by the execution of another process; an independent process cannot.
Advantages of process cooperation:
– Information sharing
– Computation speed-up
– Modularity
– Convenience
Issues:
– Communication
– Keeping processes out of each other's way
– Ensuring proper sequencing when there are dependencies
A common paradigm is the producer-consumer problem:
– unbounded-buffer: no practical limit on the size of the buffer
– bounded-buffer: assumes a fixed buffer size


5. Interprocess Communication
Processes need mechanisms for communication and synchronization:
– Shared memory
– OS-provided IPC (a message system): no shared variables are needed; it offers two operations:
  • send(message) – message size fixed or variable
  • receive(message)
If P and Q wish to communicate, they need to establish a communication link between them and exchange messages via send/receive.
Implementation of the communication link:
– physical (e.g., shared memory, hardware bus)
– logical (e.g., logical properties)

Quiz

1.What are the major activities of an Operating System with regards to process Management?

=
-Process creation and deletion.
-Process suspension and resumption.
-Process communication.
-Deadlock handling.


2.What are the major activities of an operating system with regards to Memory Management?

=
-The operating system manages the main memory.
-Keep track of which parts of memory are currently being used and by whom.
-Allocate and deallocate memory space as needed.

-Decide which processes to load when memory space becomes available - long term or medium term scheduler.



3.What are the major activities of an operating system with regards to Secondary Storage Management?

=
-Storage allocation
  • Disk scheduling
  • minimize seeks (arm movement … very slow operation)
  • Disk as the media for mapping virtual memory space
  • Disk caching for performance

4.What are the major activities of an operating system with regards to File Management?

=
-File creation and deletion.
-Directory creation and deletion.
-Support of primitives for manipulating files and directories.
-Mapping files onto secondary storage.
-File backup on stable (nonvolatile) storage media.

5.What is the purpose of the command interpreter?

=
-It reads commands from the user (or from command files) and executes them, usually by turning them into one or more system calls; these commands deal with:
-Protection
-I/O handling
-File system access
-Networking

    System Boot

    The typical computer system boots over and over again with no problems, starting the computer's operating system (OS) and identifying its hardware and software components that all work together to provide the user with the complete computing experience. But what happens between the time that the user powers up the computer and when the GUI icons appear on the desktop?
    In order for a computer to successfully boot, its BIOS, operating system and hardware components must all be working properly; failure of any one of these three elements will likely result in a failed boot sequence.
    When the computer's power is first turned on, the CPU initializes itself, which is triggered by a series of clock ticks generated by the system clock. Part of the CPU's initialization is to look to the system's ROM BIOS for its first instruction in the startup program. The ROM BIOS stores the first instruction, which is the instruction to run the power-on self test (POST), in a predetermined memory address. POST begins by checking the BIOS chip and then tests CMOS RAM. If the POST does not detect a battery failure, it then continues to initialize the CPU, checking the inventoried hardware devices (such as the video card), secondary storage devices, such as hard drives and floppy drives, ports and other hardware devices, such as the keyboard and mouse, to ensure they are functioning properly.

    System Generation

    An operational system is a combination of the z/TPF system, application programs, and people. People assign purpose to the system and use the system. The making of an operational system depends on three interrelated concepts: system definition, system initialization, and system restart.

    The first two items are sometimes collectively called system generation; also installing and implementing. System definition is sometimes called design. System restart is the component that uses the results of a system generation to place the system in a condition to process real-time input. The initial startup is a special case of restart and for this reason system restart is sometimes called initial program load, or IPL. System restart uses values found in tables set up during system generation and changed during the online execution of the system. A switchover implies shifting the processing load to a different central processing complex (CPC), and requires some additional procedures on the part of a system operator. A restart or switchover may be necessary either for a detected hardware failure, detected software failure, or operator option. In any event, system definition (design), initialization, restart, and switchover are related to error recovery. This provides the necessary background to use this information, which is the principal reference to be used to install the z/TPF system.
    Performing a system generation requires a knowledge of the z/TPF system structure, system tables, and system conventions, a knowledge of the applications that will be programmed to run under the system, and a user's knowledge of z/OS. Knowledge of the z/TPF system, Linux, and the application are required to make intelligent decisions to accomplish the system definition of a unique z/TPF system environment. The use of z/OS and Linux is necessary because many programs used to perform system generation run under control of z/OS or Linux. Although this information does not rely on much z/OS or Linux knowledge, when the moment arrives to use the implementation information, the necessary z/OS and Linux knowledge must be acquired. You are assumed to have some knowledge of the S/370 assembly program as well as jargon associated with the z/OS and Linux operating systems. Some knowledge of C language is also helpful, because some of the programs that are used to generate the system are written in C.

    Virtual Machine

    Implementation

    In the IT industry, implementation refers to the post-sales process of guiding a client from purchase to use of the software or hardware that was purchased. This includes Requirements Analysis, Scope Analysis, Customizations, Systems Integrations, User Policies, User Training and Delivery. These steps are often overseen by a Project Manager using Project Management Methodologies set forth in the Project Management Body of Knowledge. Software implementations involve several professionals that are relatively new to the knowledge-based economy, such as Business Analysts, Technical Analysts, Solutions Architects, and Project Managers.

    Benefits


    *Designed for virtual machines running on Windows Server 2008 and Microsoft Hyper-V Server. Hyper-V is the next-generation hypervisor-based virtualization platform from Microsoft, which is designed to offer high performance, enhanced security, high availability, scalability, and many other improvements. VMM is designed to take full advantage of these foundational benefits through a powerful yet easy-to-use console that streamlines many of the tasks necessary to manage virtualized infrastructure. Even better, administrators can manage their traditional physical servers right alongside their virtual resources through one unified console.

    *Support for Microsoft Virtual Server and VMware ESX. With this release, VMM now manages VMware ESX virtualized infrastructure in conjunction with the Virtual Center product. Now administrators running multiple virtualization platforms can rely on one tool to manage virtually everything. With its compatibility with VMware VI3 (through Virtual Center), VMM now supports features such as VMotion and can also provide VMM-specific features like Intelligent Placement to VMware servers.

    *Performance and Resource Optimization (PRO). PRO enables the dynamic management of virtual resources through Management Packs that are PRO-enabled. Utilizing the deep monitoring capabilities of System Center Operations Manager 2007, PRO enables administrators to establish remedial actions for VMM to execute if poor performance or pending hardware failures are identified in hardware, operating systems, or applications. As an open and extensible platform, PRO encourages partners to design custom management packs that promote compatibility of their products and solutions with PRO's powerful management capabilities.

    *Maximize datacenter resources through consolidation. A typical physical server in the datacenter operates at only 5 to 15 percent CPU capacity. VMM can assess and then consolidate suitable server workloads onto virtual machine host infrastructure, thus freeing up physical resources for repurposing or hardware retirement. Through physical server consolidation, continued datacenter growth is less constrained by space, electrical, and cooling requirements.

    Examples

    Examples are PVM (Parallel Virtual Machine) and MPI (Message Passing Interface) ...

    System Structure

    Simple Structure
    Each level performs a related subset of functions
    Each level relies on the next lower level to perform more primitive functions
    This decomposes a problem into a number of more manageable subproblems


    Layered Approach
    The operating system is divided into a number of layers (levels), each built on top of lower layers. The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.
    With modularity, layers are selected such that each uses functions (operations) and services of only lower-level layers.

    System Call

    Process control

    System calls provide the interface between a running program and the operating system.
    • Generally available as assembly-language instructions
    • Languages defined to replace assembly language for systems programming allow system calls to be made directly (e.g., C, C++)
    Three general methods are used to pass parameters between a running program and the operating system:
    • Pass parameters in registers
    • Store the parameters in a table in memory, and pass the table address as a parameter in a register
    • Push (store) the parameters onto the stack by the program, and pop them off the stack by the operating system

    File Management


    A file is a collection of related information defined by its creator. Commonly, files represent programs (both source and object forms) and data.
    The operating system is responsible for the following activities in connection with file management:
    1. File creation and deletion.
    2. Directory creation and deletion.
    3. Support of primitives for manipulating files and directories.
    4. Mapping files onto secondary storage.
    5. File backup on stable (nonvolatile) storage media.

    Device Management
    Device management is a set of technologies, protocols and standards used to allow the remote management of mobile devices, often involving updates of firmware over the air (FOTA). The network operator, handset OEM or in some cases even the end-user (usually via a web portal) can use Device Management, also known as Mobile Device Management, or MDM, to update the handset firmware/OS, install applications and fix bugs, all over the air. Thus, large numbers of devices can be managed with single commands and the end-user is freed from the requirement to take the phone to a shop or service center to refresh or update.
    For companies, a Device Management system means better control and safety as well as increased efficiency, decreasing the possibility for device downtime. As the number of smart devices increases in many companies today, there is a demand for managing, controlling and updating these devices in an effective way. As mobile devices have become true computers over the years, they also force organizations to manage them properly. Without proper management and security policies, mobile devices pose threat to security: they contain lots of information, while they may easily get into wrong hands. Normally an employee would need to visit the IT / Telecom department in order to do an update on the device. With a Device Management system, that is no longer the issue. Updates can easily be done "over the air". The content on a lost or stolen device can also easily be removed by "wipe" operations. In that way sensitive documents on a lost or a stolen device do not arrive in the hands of others.

    Information Maintenance

    Get time and date, set time and date, get process attributes, etc.

    Operating system

    The computer that controls the microwave oven in your kitchen, for example, doesn't need an operating system. It has one set of tasks to perform, very straightforward input to expect (a numbered keypad and a few pre-set buttons) and simple, never-changing hardware to control. For a computer like this, an operating system would be unnecessary baggage, driving up the development and manufacturing costs significantly and adding complexity where none is required. Instead, the computer in a microwave oven simply runs a single hard-wired program all the time.

    System Components

    Operating System Process Management





    In operating systems, a process is defined as "a program in execution" [10]. A process can be considered an entity that consists of a number of elements, including: identifier, state, priority, program counter, memory pointer, context data, and I/O request. The above information about a process is usually stored in a data structure, typically called a process block. Figure 1 shows a simplified process block [10]. Because process management involves scheduling (CPU scheduling, I/O scheduling, and so on), state switching, and resource management, the process block is one of the most commonly accessed data types in an operating system. Its design directly affects the efficiency of the operating system. As a result, in most operating systems there is a data object that contains information about all the currently active processes. It is called the process controller. Figure 2 shows the structure of a process controller [10], which is implemented as a linked list of process blocks.

    A process is a program in execution. A process needs certain resources, including CPU time, memory, files, and I/O devices, to accomplish its task.
    The operating system is responsible for the following activities in connection with process management:
    • Process creation and deletion
    • Process suspension and resumption
    • Provision of mechanisms for:
      – process synchronization
      – process communication







    Main Memory Management



    Memory is a large array of words or bytes, each with its own address. It is a repository of quickly accessible data shared by the CPU and I/O devices. Main memory is a volatile storage device: it loses its contents in the case of system failure.
    The operating system is responsible for the following activities in connection with memory management:
    • Keep track of which parts of memory are currently being used and by whom
    • Decide which processes to load when memory space becomes available
    • Allocate and deallocate memory space as needed







    file management system



    A file management system is a computer program that provides a user interface to work with file systems. The most common operations used are create, open, edit, view, print, play, rename, move, copy, delete, attributes, properties, search/find, and permissions. Files are typically displayed in a hierarchy. Some file managers contain features inspired by web browsers, including forward and back navigational buttons.





























    I/O System Management



    The I/O system consists of:
    • A buffer-caching system
    • A general device-driver interface
    • Drivers for specific hardware devices




    Secondary Storage Management

    Secondary storage management is a classical feature of database management systems. It is usually supported through a set of mechanisms. These include index management, data clustering, data buffering, access path selection and query optimization.
    None of these is visible to the user: they are simply performance features. However, they are so critical in terms of performance that their absence will keep the system from performing some tasks (simply because they take too much time). The important point is that they be invisible. The application programmer should not have to write code to maintain indices, to allocate disk storage, or to move data between disk and main memory. Thus, there should be a clear independence between the logical and the physical level of the system.

    Since main memory (primary storage) is volatile and too small to accommodate all data and programs permanently, the computer system must provide secondary storage to back up main memory. Most modern computer systems use disks as the principal on-line storage medium, for both programs and data.
    The operating system is responsible for the following activities in connection with disk management:
    • Free space management
    • Storage allocation
    • Disk scheduling

    Protection System

    An active protection system, or APS, protects a tank or other armoured fighting vehicle from incoming fire before it hits the vehicle's armour. There are two general categories: soft kill systems, which use jamming or decoys to confuse a missile's guidance system, and hard kill systems, which attempt to detect and destroy incoming projectiles.

    Protection refers to a mechanism for controlling access by programs, processes, or users to both system and user resources.
    The protection mechanism must:
    • distinguish between authorized and unauthorized usage
    • specify the controls to be imposed
    • provide a means of enforcement

    Command interpreter system

    A command interpreter is the part of a computer operating system that understands and executes commands that are entered interactively by a human being or from a program. In some operating systems, the command interpreter is called the shell.

    Many commands are given to the operating system by control statements, which deal with:
    • Process creation and management
    • I/O handling
    • Secondary-storage management
    • Main-memory management
    • File-system access
    • Protection
    • Networking
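    As a sketch of the idea, a minimal command interpreter is just a loop that reads a command line, asks the OS to run it as a child process, and waits for it to finish; the class name and prompt below are illustrative.

import java.io.BufferedReader;
import java.io.InputStreamReader;

// Minimal command-interpreter (shell) loop: read a command, hand it to the
// operating system as a new child process, wait for it, repeat.
public class TinyShell {
    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        System.out.print("> ");
        while ((line = in.readLine()) != null) {
            line = line.trim();
            if (line.equals("exit")) break;
            if (!line.isEmpty()) {
                try {
                    // Each command becomes a child process created on the user's behalf.
                    Process p = new ProcessBuilder(line.split("\\s+")).inheritIO().start();
                    p.waitFor();
                } catch (Exception e) {
                    System.out.println(line.split("\\s+")[0] + ": command not found");
                }
            }
            System.out.print("> ");
        }
    }
}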