
Thursday, June 25, 2009

Hardware Protection
Hardware protection can accomplish various things, including write protection for hard disk drives, memory protection, and monitoring and trapping unauthorized system calls. Again, no single tool will be foolproof, and the "stronger" the hardware-based protection is, the more likely it is to interfere with the "normal" operation of your computer. The popular idea of write protection may stop viruses *spreading* to the disk that is protected, but doesn't, in itself, prevent a virus from *running*. Also, some existing hardware protection schemes can be easily bypassed, fooled, or disconnected if the virus writer knows them well and designs a virus that is aware of the particular defense.

The big problem with hardware protection is that there are few (if any) operations that a general-purpose computer can perform that are used by viruses *only*. Therefore, making a hardware protection system for such a computer typically involves deciding on some (small) set of operations that are "valid but not normally performed except by viruses", and designing the system to prevent these operations. Unfortunately, this means either designing limitations into the level of protection the hardware system provides or adding limitations to the computer's functionality by installing the hardware protection system.

Much can be achieved, however, by making the hardware "smarter". This is double-edged: while it provides more security, it usually means adding a program in an EPROM to control it, which allows a virus to locate that program and call it directly, past the point that grants access. It is still possible to implement this correctly, though, if the program is not in the address space of the main CPU, has its own CPU, and is connected directly to the hard disk and the keyboard. As an example, there is a PC-based product called ExVira which does this and seems fairly secure, but it is a whole computer on an add-on board and is quite expensive.


Dual Mode Operation

- Sharing system resources requires the operating system to ensure that an incorrect program cannot cause other programs to execute incorrectly.

- Provide hardware support to differentiate between at least two modes of operation.

1. User mode - execution done on behalf of a user.
2. Monitor mode (also supervisor mode or system mode) - execution done on behalf of the operating system.
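
The mode bit itself is a hardware feature (a bit in a processor status register), but the idea can be illustrated with a short C sketch. All names below (cpu_mode, trap_to_monitor, halt_machine) are invented for illustration; this is not real kernel code.

    #include <stdio.h>

    /* Illustrative stand-in for the hardware mode bit (0 = monitor, 1 = user). */
    enum mode { MONITOR = 0, USER = 1 };
    static enum mode cpu_mode = USER;

    /* Stand-in for the hardware trap taken when user code attempts a
     * privileged operation: control transfers to the operating system. */
    static void trap_to_monitor(const char *why) {
        cpu_mode = MONITOR;                 /* hardware switches the mode bit */
        printf("trap: %s -- now running OS code in monitor mode\n", why);
        cpu_mode = USER;                    /* mode restored on return to user code */
    }

    /* A privileged operation: legal only in monitor mode. */
    static void halt_machine(void) {
        if (cpu_mode != MONITOR) {
            trap_to_monitor("privileged instruction in user mode");
            return;                         /* the OS decides what to do (e.g. kill the program) */
        }
        puts("halting (monitor mode)");
    }

    int main(void) {
        halt_machine();                     /* attempted from user mode -> trapped */
        return 0;
    }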


I/O Protection
- All I/O instructions are privileged instructions.
- Must ensure that a user program could never gain control of the computer in monitor mode (i.e., a user program that, as part of its execution, stores a new address in the interrupt vector).
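
Because the I/O instructions themselves are privileged, a user program has to ask the operating system to perform I/O on its behalf through a system call. A minimal POSIX C sketch of that division of labour:

    #include <string.h>
    #include <unistd.h>

    /* User-mode code never touches the device directly. It issues a system
     * call; the trap switches the CPU to monitor mode, the kernel performs
     * the privileged I/O instructions, then control returns in user mode. */
    int main(void) {
        const char msg[] = "hello from user mode\n";
        write(STDOUT_FILENO, msg, strlen(msg));   /* system call -> trap -> kernel I/O */
        return 0;
    }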



Memory Protection
• Must provide memory protection at least for the interrupt vector and the interrupt service routines.
• In order to have memory protection, add two registers that determine the range of legal addresses a program may access:
  – base register – holds the smallest legal physical memory address.
  – limit register – contains the size of the range.
• Memory outside the defined range is protected.
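
The check the hardware performs on every user-mode memory reference is just two comparisons against the base and limit registers. A small C sketch (the register values are arbitrary example numbers):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative base/limit check: the hardware compares every address
     * generated in user mode against these two registers. */
    static const uint32_t base_reg  = 300040;   /* smallest legal physical address */
    static const uint32_t limit_reg = 120900;   /* size of the legal range */

    static bool address_is_legal(uint32_t addr) {
        return addr >= base_reg && addr < base_reg + limit_reg;
    }

    int main(void) {
        uint32_t probe[] = { 300040, 420939, 420940, 256000 };
        for (int i = 0; i < 4; i++)
            printf("address %u: %s\n", (unsigned)probe[i],
                   address_is_legal(probe[i]) ? "ok" : "trap to operating system");
        return 0;
    }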

CPU Protection
In series with the Central Processing Unit (CPU), in some applications, is a Voltage Regulator Module (VRM). A VRM DC-DC converter supplies the required voltage and current to a processor.

Problem/Solution
The VRM design approach removes cable inductance from the distribution and reduces board inductance. A load-change transient occurs when coming out of or entering a low-power mode. For some CPUs this load-change transient can be on the order of 13A. These are not only quick changes in current demand, but also long-lasting average current requirements. Even during normal operation the current demand can still change by as much as 7A as activity levels change within the processor component. Maintaining voltage tolerance during these changes in current requires high-density bulk capacitors with low Effective Series Resistance (ESR). These high-current immediate demands on the circuits can cause components to fail.

Circuit protection prevents the VRM from damaging the CPU in the event of a VRM fault. If the VRM fails, the processor tries to pull too much power. A PolySwitch device can be placed on the input pins to the VRMs that supply power to the processors, therefore protecting the processors. If there is a failure, only the VRM needs to be replaced, rather than the more expensive CPU.

Device Selection
Up to 12V and several amps are applied to the circuit. The RGE series, typically the RGE600–RGE900, is used in this application.

Figure 1. Typical schematic: a power supply feeding several Voltage Regulator Module/processor pairs, with a PolySwitch device on the input of each VRM.

Storage Structure
One of the first decisions in designing a server is what persistent storage structures to use. This section discusses how we organized storage structures for the grid example. There are numerous ways to organize storage; the primary decision is where to store the information about each item on the grid. We first discuss the implemented design and then present a couple of alternatives.
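
The grid design itself is not reproduced here, so the following C sketch is only one plausible reading of the "where to store item information" decision: keep each item's record inside the cell it occupies, with a separate table keyed by item id as the obvious alternative. The types and names are invented for illustration.

    #include <stdio.h>

    #define GRID_W 8
    #define GRID_H 8

    /* One plausible layout: the information about an item lives in the grid
     * cell the item occupies (illustrative types only). */
    struct item { int id; char kind; };
    struct cell { int occupied; struct item it; };

    static struct cell grid[GRID_H][GRID_W];

    static void place(int x, int y, int id, char kind) {
        grid[y][x].occupied = 1;
        grid[y][x].it.id   = id;
        grid[y][x].it.kind = kind;
    }

    int main(void) {
        place(2, 3, 42, 'T');
        printf("cell (2,3) holds item %d\n", grid[3][2].it.id);
        /* Alternative: store items in a table keyed by item id and keep only
         * the id in each cell, at the cost of an extra lookup per access. */
        return 0;
    }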


Main memory
The main memory of the computer is also known as RAM, standing for Random Access Memory. It is constructed from integrated circuits and needs electrical power in order to maintain its information; when power is lost, the information is lost too! It can be directly accessed by the CPU. The access time to read or write any particular byte is independent of where in memory that byte is, and is currently approximately 50 nanoseconds (a nanosecond being a thousand millionth of a second). This is broadly comparable with the speed at which the CPU needs to access data. Main memory is expensive compared to external memory, so it has limited capacity, although the capacity available for a given price is increasing all the time. For example, many home Personal Computers now have a capacity of 16 megabytes (million bytes), while 64 megabytes is commonplace on commercial workstations. The CPU will normally transfer data to and from the main memory in groups of two, four or eight bytes, even if the operation it is undertaking only requires a single byte.


Magnetic disks
Magnetic disks are flat circular plates of metal or plastic, coated on both sides with iron oxide. Input signals, which may be audio, video, or data, are recorded on the surface of a disk as magnetic patterns or spots in spiral tracks by a recording head while the disk is rotated by a drive unit. The heads, which are also used to read the magnetic impressions on the disk, can be positioned...

More generally, a magnetic disk is a memory device, such as a floppy disk, a hard disk, or a removable cartridge, that is covered with a magnetic coating on which digital information is stored in the form of microscopically small, magnetized needles.


Moving Head Disk Mechanism


Rotation speeds: 60 to 200 rotations per second
Head crash: read-write head makes contact with the surface
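
From the rotation speeds quoted above, the average rotational latency (the time for half a revolution) follows directly; a small C sketch:

    #include <stdio.h>

    /* Average rotational latency = time for half a revolution = 1 / (2 * rps). */
    int main(void) {
        double speeds_rps[] = { 60.0, 200.0 };   /* the range quoted above */
        for (int i = 0; i < 2; i++) {
            double latency_ms = 1000.0 / (2.0 * speeds_rps[i]);
            printf("%.0f rotations/s -> average rotational latency %.2f ms\n",
                   speeds_rps[i], latency_ms);
        }
        return 0;
    }

This gives about 8.33 ms at 60 rotations per second and 2.50 ms at 200 rotations per second.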



Magnetic tape
Magnetic tape is a medium for magnetic recording generally consisting of a thin magnetizable coating on a long and narrow strip of plastic. Nearly all recording tape is of this type, whether used for recording audio or video or for computer data storage. It was originally developed in Germany, based on the concept of magnetic wire recording. Devices that record and play back audio and video using magnetic tape are generally called tape recorders and video tape recorders respectively. A device that stores computer data on magnetic tape can be called a tape drive, a tape unit, or a streamer.
Magnetic tape revolutionized the broadcast and recording industries. In an age when all radio (and later television) was live, it allowed programming to be prerecorded. In a time when gramophone records were recorded in one take, it allowed recordings to be created in multiple stages and easily mixed and edited with a minimal loss in quality between generations. It is also one of the key enabling technologies in the development of modern computers. Magnetic tape allowed massive amounts of data to be stored in computers for long periods of time and rapidly accessed when needed.
Today, many other technologies exist that can perform the functions of magnetic tape. In many cases these technologies are replacing tape. Despite this, innovation in the technology continues and tape is still widely used.


Tuesday, June 23, 2009

Storage Hierarchy

The range of memory and storage devices within the computer system. The following list starts with the slowest devices and ends with the fastest.

VERY SLOW
  Punch cards (obsolete)
  Punched paper tape (obsolete)

FASTER
  Bubble memory
  Floppy disks

MUCH FASTER
  Magnetic tape
  Optical discs (CD-ROM, DVD-ROM, MO, etc.)
  Magnetic disks with movable heads
  Magnetic disks with fixed heads (obsolete)
  Low-speed bulk memory

FASTEST
  Flash memory
  Main memory
  Cache memory
  Microcode
  Registers
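
To attach rough numbers to the ordering above, the sketch below tabulates order-of-magnitude access times for some of the levels. The figures are approximate, assumed values for illustration, not measurements of any particular device.

    #include <stdio.h>

    /* Rough, order-of-magnitude access times only (assumed for illustration). */
    struct level { const char *name; const char *typical_access; };

    int main(void) {
        struct level h[] = {
            { "Registers",      "< 1 ns"          },
            { "Cache memory",   "~1-10 ns"        },
            { "Main memory",    "~50-100 ns"      },
            { "Flash memory",   "~10-100 us"      },
            { "Magnetic disk",  "~5-15 ms"        },
            { "Optical disc",   "~100-200 ms"     },
            { "Magnetic tape",  "seconds-minutes" },
        };
        for (unsigned i = 0; i < sizeof h / sizeof h[0]; i++)
            printf("%-15s %s\n", h[i].name, h[i].typical_access);
        return 0;
    }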






Storage Hierarchy


To clarify the "guarantees" provided at different settings of the persistence spectrum without binding the application to a specific environment or set of storage devices, MBFS implements the continuum, in part, with a logical storage hierarchy. The hierarchy is defined by N levels:



1. LM (Local Memory storage): very high-speed volatile storage located on the machine creating the file.

2. LCM (Loosely Coupled Memory storage): high-speed volatile storage consisting of the idle memory space available across the system.

3.–N. DA (Distributed Archival storage): slower-speed stable storage space located across the system.

Logically, decreasing levels of the hierarchy are characterized by stronger persistence, larger storage capacity, and slower access times. The LM level is simply locally addressable memory (whether on or off CPU). The LCM level combines the idle memory of machines throughout the system into a loosely coupled, and constantly changing, storage space. The DA level may actually consist of any number of sub-levels (denoted DA1, DA2, ..., DAn) each of increasing persistence (or capacity) and decreasing performance. LM data will be lost if the current machine crashes or loses power. LCM data has the potential to be lost if one or more machines crash or lose power. DA data is guaranteed to survive power outages and machine crashes. Replication and error correction are provided at the LCM and DA levels to improve the persistence offered by those levels.

Each level of the logical MBFS hierarchy is ultimately implemented by a physical storage device. LM is implemented using standard RAM on the local machine and LCM using the idle memory of workstations throughout the network. The DA sub-levels must be mapped to some organization of the available archival storage devices in the system. The system administrator is expected to define the mapping via a system configuration file. For example, DA-1 might be mapped to the distributed disk system while DA-2 is mapped to the distributed tape system.


Because applications are written using the logical hierarchy, they can be run in any environment, regardless of the mapping. The persistence guarantees provided by the three main levels of the hierarchy (LM, LCM, DA1) are well defined. In general, applications can use the other layers of the DA to achieve higher persistence guarantees, without knowing the exact details of the persistence guaranteed; only that it is better. For applications that want to change their storage behavior based on the characteristics of the current environment, the details of each DA's persistence guarantees, such as the expected mean-time-till-failure, can be obtained via a stat() call to the file system. Thus, MBFS makes the layering abstraction explicit while hiding the details of the devices used to implement it. Applications can control persistence with or without exact knowledge of the characteristics of the hardware used to implement it. Once the desired persistence level has been selected, MBFS's loosely coupled memory system uses an addressing algorithm to distribute data to idle machines and employs a migration algorithm to move data off machines that change from idle to active. The details of the addressing and migration algorithms can be found in [15,14] and are also used by the archival storage levels. Finally, MBFS provides whole-file consistency via callbacks similar to Andrew[19] and a Unix security and protection model.
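
MBFS's actual programming interface is not reproduced in this text, so the C sketch below uses invented names (mbfs_level_mttf standing in for the stat()-style query, mbfs_set_level standing in for binding a file to a level, and a made-up file path) purely to illustrate how an application might pick the shallowest archival level whose expected mean-time-till-failure meets its needs.

    #include <stdio.h>

    /* All names are invented for illustration; the stubs stand in for
     * file-system calls MBFS would provide. */
    enum mbfs_level { LM, LCM, DA1, DA2, DA3, N_LEVELS };

    /* Stand-in for the stat()-style query described above: expected
     * mean-time-till-failure of data stored at a given level, in hours. */
    static double mbfs_level_mttf(enum mbfs_level lvl) {
        static const double mttf[N_LEVELS] = { 1, 24, 10000, 100000, 1000000 };
        return mttf[lvl];
    }

    /* Stand-in for a call that binds a file to a persistence level. */
    static void mbfs_set_level(const char *path, enum mbfs_level lvl) {
        printf("storing %s at level %d\n", path, (int)lvl);
    }

    int main(void) {
        /* Pick the shallowest archival level whose MTTF meets a requirement. */
        double required_hours = 50000;
        enum mbfs_level lvl = DA1;
        while (lvl < N_LEVELS - 1 && mbfs_level_mttf(lvl) < required_hours)
            lvl = (enum mbfs_level)(lvl + 1);
        mbfs_set_level("/grid/results.dat", lvl);
        return 0;
    }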


Caching
Caching is a well known concept in computer science: when programs continually access the same set of instructions, a massive performance benefit can be realized by storing those instructions in RAM. This prevents the program from having to access the disk thousands or even millions of times during execution by quickly retrieving them from RAM. Caching on the web is similar in that it avoids a roundtrip to the origin web server each time a resource is requested and instead retrieves the file from a local computer's browser cache or a proxy cache closer to the user.
 
The most commonly encountered caches on the web are the ones found in a user's web browser, such as Internet Explorer, Mozilla and Netscape. When a web page, image, or JavaScript file is requested through the browser, each of these resources may be accompanied by HTTP header directives that tell the browser how long the object can be considered fresh, that is, for how long the resource can be retrieved directly from the browser cache as opposed to from the origin or proxy server. Since the browser represents the cache closest to the end user, it offers the maximum performance benefit whenever content can be stored there.
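
The freshness rule described above comes down to comparing a response's age against the lifetime its headers granted it (for example via Cache-Control: max-age). A minimal C sketch with invented structure and function names:

    #include <stdio.h>
    #include <time.h>

    /* Minimal freshness check in the spirit of HTTP caching: an entry is
     * fresh while its age is below the lifetime the server granted it. */
    struct cache_entry {
        const char *url;
        time_t      fetched_at;    /* when the response was stored          */
        long        max_age_secs;  /* freshness lifetime granted by headers */
    };

    static int is_fresh(const struct cache_entry *e, time_t now) {
        return (now - e->fetched_at) < e->max_age_secs;
    }

    int main(void) {
        time_t now = time(NULL);
        struct cache_entry logo = { "http://example.com/logo.png", now - 120, 3600 };
        printf("%s: %s\n", logo.url,
               is_fresh(&logo, now) ? "serve from cache" : "revalidate with the server");
        return 0;
    }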
 
Coherency and consistency
Transactional Coherence and Consistency (TCC) offers a way to simplify parallel programming by executing all code in transactions. In TCC systems, transactions serve as the fundamental unit of parallel work, communication and coherence. As each transaction completes, it writes all of its newly produced state to shared memory atomically, while restarting other processors that have speculatively read from modified data. With this mechanism, a TCC-based system automatically handles data synchronization correctly, without programmer intervention. To gain the benefits of TCC, programs must be decomposed into transactions. Decomposing a program into transactions is largely a matter of performance tuning rather than correctness, and a few basic transaction programming optimization techniques are sufficient to obtain good performance over a wide range of applications with little programmer effort.
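
TCC commits transactions in hardware, so the following C sketch is only a software illustration of the programming model: each unit of work is wrapped in TX_BEGIN/TX_END markers, implemented here with an ordinary mutex, which gives the atomic-commit behaviour but none of the hardware speculation. The macro names are invented.

    #include <pthread.h>
    #include <stdio.h>

    /* Software stand-in for TCC transactions: a real TCC machine would run
     * the body speculatively and commit its writes to shared memory
     * atomically in hardware; here a single mutex provides the atomicity. */
    static pthread_mutex_t tx_lock = PTHREAD_MUTEX_INITIALIZER;
    #define TX_BEGIN() pthread_mutex_lock(&tx_lock)
    #define TX_END()   pthread_mutex_unlock(&tx_lock)

    static long shared_sum = 0;           /* state produced by transactions */

    static void *worker(void *arg) {
        long id = (long)arg;
        for (int i = 0; i < 1000; i++) {
            TX_BEGIN();                   /* one loop iteration = one transaction */
            shared_sum += id;             /* writes become visible atomically */
            TX_END();
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1);
        pthread_create(&t2, NULL, worker, (void *)2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared_sum = %ld (expected 3000)\n", shared_sum);
        return 0;
    }

On a real TCC machine the two workers would run their transactions speculatively in parallel, and only conflicting transactions would be restarted.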

7. Difference Between RAM and DRAM

Random Access Memory (RAM) is the "working memory" in a computer. Additional RAM allows a computer to work with more information at the same time, which can have a dramatic effect on total system performance.

RAM is also known as: main memory, internal memory, primary storage, memory "stick", RAM "stick".

Dynamic random access memory (DRAM) is a type of random access memory that stores each bit of data in a separate capacitor within an IC. Since real capacitors leak charge, the information eventually fades unless the capacitor charge is refreshed periodically. Because of this refresh requirement, it is a dynamic memory as opposed to SRAM and other static memory. And unlike non-volatile firmware chips, both DRAM and SRAM lose their content when the power is turned off.

6. Direct Memory Access (DMA)

Direct memory access (DMA) is a feature of modern computers and microprocessors that allows certain hardware subsystems within the computer to access system memory for reading and/or writing independently of the central processing unit. Many hardware systems use DMA, including disk drive controllers, graphics cards, network cards and sound cards. DMA is also used for intra-chip data transfer in multi-core processors, especially in multiprocessor systems-on-chip, where each processing element is equipped with a local memory (often called scratchpad memory) and DMA is used for transferring data between the local memory and the main memory. Computers that have DMA channels can transfer data to and from devices with much less CPU overhead than computers without a DMA channel. Similarly, a processing element inside a multi-core processor can transfer data to and from its local memory without occupying processor time, allowing computation and data transfer to proceed concurrently.

Without DMA, using programmed input/output (PIO) mode for communication with peripheral devices, or load/store instructions in the case of multicore chips, the CPU is typically fully occupied for the entire duration of the read or write operation, and is thus unavailable to perform other work. With DMA, the CPU initiates the transfer, does other operations while the transfer is in progress, and receives an interrupt from the DMA controller once the operation is done. This is especially useful in real-time computing applications where not stalling behind concurrent operations is critical. Another related application area is various forms of stream processing, where it is essential to have data processing and transfer in parallel in order to achieve sufficient throughput.
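
The sequence the paragraph describes (program the controller, go do other work, get notified on completion) can be sketched against a made-up memory-mapped DMA controller. The register layout is invented and the "hardware" is simulated by a helper function so the example runs anywhere; on real hardware the controller moves the data by itself and raises an interrupt.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Invented register layout of a simulated DMA controller.  On real
     * hardware this struct would be mapped over device registers. */
    struct dma_regs {
        const void *src;
        void       *dst;
        uint32_t    count;      /* bytes to move                       */
        uint32_t    start;      /* writing 1 kicks off the transfer    */
        uint32_t    done;       /* set by the controller on completion */
    };

    /* Stand-in for the DMA engine itself; a real controller moves the data
     * without the CPU and then raises an interrupt instead of setting a flag. */
    static void simulated_dma_engine(struct dma_regs *r) {
        memcpy(r->dst, r->src, r->count);
        r->done = 1;
    }

    int main(void) {
        char src[32] = "block read via DMA";
        char dst[32] = {0};
        struct dma_regs dma = {0};

        /* CPU programs the controller, then is free to do other work. */
        dma.src = src; dma.dst = dst; dma.count = sizeof src;
        dma.start = 1;
        simulated_dma_engine(&dma);          /* happens "in the background" */

        if (dma.done)                        /* an interrupt handler would run here */
            printf("transfer complete: %s\n", dst);
        return 0;
    }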

5. Device Status Table

Required Parameter Group:

  No.  Parameter                    Usage   Type
  1    Receiver variable            Output  Char(*)
  2    Length of receiver variable  Input   Binary(4)
  3    Format name                  Input   Char(8)
  4    Device description           Input   Char(10)
  5    Resource name                Input   Char(10)
  6    Error code                   I/O     Char(*)

Default Public Authority: *USE

Threadsafe: No

The Retrieve Device Status (QTARDSTS) API retrieves dynamic status information for the specified device and for any currently mounted tape cartridge. The device description must be varied on. The resource that is associated with a specified tape media library device description must currently exist on the system.

Note: If the device status has been changed by a manual operation or by another system sharing the device, the information will not be accurate.

The QTARDSTS API currently supports the following device types:

  • Tape (TAP) devices
  • Tape media library (TAPMLB) devices
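
The parameter group above maps directly onto a program call. The C sketch below is only a rough illustration: the "RDST0100" format name, the receiver layout, and the simplified error-code structure are assumptions to verify against the QTARDSTS documentation, and the stub function stands in for the real system API so the sketch compiles off-platform (on IBM i the actual API is resolved when the program is bound).

    #include <stdio.h>
    #include <string.h>

    /* Simplified API error-code structure (bytes provided / bytes available). */
    struct api_error { int bytes_provided; int bytes_available; char data[64]; };

    /* Stub standing in for the system API; parameter order mirrors the
     * required parameter group listed above. */
    static void QTARDSTS(char *receiver, int *receiver_len, const char *format,
                         const char *device, const char *resource,
                         struct api_error *err) {
        (void)format; (void)device; (void)resource;
        memset(receiver, 0, (size_t)*receiver_len);   /* pretend status was filled in */
        err->bytes_available = 0;                      /* no error reported */
    }

    int main(void) {
        char receiver[512];
        int  receiver_len = sizeof receiver;
        struct api_error err = { sizeof err, 0, "" };

        /* 1 receiver, 2 length, 3 format name, 4 device, 5 resource, 6 error code */
        QTARDSTS(receiver, &receiver_len,
                 "RDST0100",               /* assumed format name: check the docs */
                 "TAP01     ",             /* 10-character device description     */
                 "          ",             /* blanks: use the device description  */
                 &err);

        if (err.bytes_available == 0)
            printf("device status retrieved for TAP01\n");
        return 0;
    }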

4. User Mode

The usermode package contains the userhelper program, which can be used to allow configured programs to be run with superuser privileges by ordinary users, and several graphical tools for users:

  • userinfo allows users to change their finger information.
  • usermount lets users mount, unmount, and format filesystems.
  • userpasswd allows users to change their passwords.

3. Monitor Mode

Monitor mode, or RFMON (Radio Frequency Monitor) mode, allows a computer with a wireless network interface card (NIC) to monitor all traffic received from the wireless network. Unlike promiscuous mode, which is also used for packet sniffing, monitor mode allows packets to be captured without having to associate with an access point or ad-hoc network first. Monitor mode only applies to wireless networks, while promiscuous mode can be used on both wired and wireless networks. Monitor mode is one of the six modes that 802.11 wireless cards can operate in: Master (acting as an access point), Managed (client, also known as station), Ad-hoc, Mesh, Repeater, and Monitor mode.

2. Difference of interrupt and trap and their use.

An interrupt is generally initiated by an I/O device, and causes the CPU to stop what it's doing, save its context, jump to the appropriate interrupt service routine, complete it, restore the context, and continue execution. For example, a serial device may assert the interrupt line and then place an interrupt vector number on the data bus. The CPU uses this to get the serial device interrupt service routine, which it then executes as above.

A trap is usually initiated by the CPU hardware. Whenever the trap condition occurs (on arithmetic overflow, for example), the CPU stops what it's doing, saves the context, jumps to the appropriate trap routine, completes it, restores the context, and continues execution. For example, if overflow traps are enabled, adding two very large integers would cause the overflow bit to be set AND the overflow trap service routine to be initiated.
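
On Linux/x86, for instance, an integer division by zero raises a hardware trap that the kernel delivers to the offending process as SIGFPE, which makes the synchronous nature of traps easy to observe. A small C sketch (behaviour is platform-dependent):

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Trap demo: the faulting instruction itself causes the exception,
     * synchronously, unlike an interrupt from an external device. */
    static void on_fpe(int sig) {
        (void)sig;
        const char msg[] = "trap: arithmetic exception (SIGFPE)\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);   /* async-signal-safe output */
        _exit(1);   /* returning would re-execute the faulting instruction */
    }

    int main(void) {
        signal(SIGFPE, on_fpe);
        volatile int zero = 0;                 /* volatile stops constant folding */
        printf("%d\n", 1 / zero);              /* divide by zero -> hardware trap */
        return 0;
    }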

1. Bootstrap program

In computing, booting is a bootstrapping process that starts operating systems when the user turns on a computer system.

Most computer systems can only execute code found in memory (ROM or RAM); modern operating systems are mostly stored on hard disk drives, LiveCDs and USB flash drives. Just after a computer has been turned on, it doesn't have an operating system in memory. The computer's hardware alone cannot perform the complicated actions of the operating system, such as loading a program from disk; so a seemingly irresolvable paradox is created: to load the operating system into memory, one appears to need to have an operating system already installed.