PLC : SESSION 10 (Concurrency)

Concurrency

Concurrency can occur at four levels: machine instruction level, high-level language statement level, unit level, and program level. Instruction-level concurrency is executing two or more machine instructions simultaneously. Statement-level concurrency is executing two or more high-level language statements simultaneously. Unit-level concurrency is executing two or more subprogram units simultaneously. Program-level concurrency is executing two or more programs simultaneously. Because no language design issues are involved with them, instruction-level and program-level concurrency are not discussed in this chapter.

Concurrency at both the subprogram and the statement levels is discussed here, with most of the focus on the subprogram level. Concurrency may appear to be a simple concept, but it presents significant challenges to the programmer, the programming language designer, and the operating system designer. Concurrent control mechanisms increase programming flexibility: they were originally invented for particular problems faced in operating systems, but they are now required for a wide variety of other programming applications.

Subprogram-Level Concurrency

A task is a unit of a program that can be in concurrent execution with other units of the same program. Each task in a program can support one thread of control. Tasks are sometimes called processes. Three characteristics distinguish tasks from subprograms. First, a task may be implicitly started, whereas a subprogram must be explicitly called. Second, when a program unit starts a task, in some cases it need not wait for the task to complete its execution before continuing its own. Third, when the execution of a task is completed, control may or may not return to the unit that started it.
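
For illustration, here is a minimal Java sketch (Java is an assumption here; the text names no particular language) of the second characteristic: the main unit starts a task and continues without waiting for it to finish.

    // Minimal sketch of subprogram-level concurrency: the caller starts
    // a task and continues immediately, without waiting for it.
    public class TaskDemo {
        public static void main(String[] args) throws InterruptedException {
            Thread task = new Thread(() -> {
                // This unit executes concurrently with main.
                System.out.println("task: running");
            });
            task.start();   // start the task; main does not wait here
            System.out.println("main: continues immediately");
            task.join();    // optionally wait for the task before exiting
        }
    }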

Categories of Tasks

Tasks fall into two general categories: heavyweight and lightweight. Simply stated, a heavyweight task executes in its own address space. Lightweight tasks all run in the same address space. It is easier to implement lightweight tasks than heavyweight tasks. Furthermore, lightweight tasks can be more efficient than heavyweight tasks, because less effort is required to manage their execution.
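
As a rough illustration, assuming Java: a lightweight task can be modeled as a thread inside the current process, and a heavyweight task as a separate operating-system process with its own address space (the child command java -version is an arbitrary choice and requires java on the PATH).

    import java.io.IOException;

    // Sketch contrasting a lightweight task (a thread sharing this
    // process's address space) with a heavyweight task (a separate
    // process that gets its own address space).
    public class WeightDemo {
        public static void main(String[] args)
                throws IOException, InterruptedException {
            // Lightweight: a thread in the same address space as main.
            Thread light = new Thread(() -> System.out.println("lightweight task"));
            light.start();
            light.join();

            // Heavyweight: a new OS process with its own address space.
            Process heavy = new ProcessBuilder("java", "-version")
                    .inheritIO()
                    .start();
            heavy.waitFor();
        }
    }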

Task Synchronization

Synchronization is a mechanism that controls the order in which tasks execute. Two kinds of synchronization are required when tasks share data: cooperation and competition. Cooperation synchronization is required between task A and task B when task A must wait for task B to complete some specific activity before task A can begin or continue its execution. Competition synchronization is required between two tasks when both require the use of some resource that cannot be simultaneously used.
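
A minimal Java sketch of competition synchronization (the language and the shared-counter scenario are illustrative assumptions): two tasks update the same counter, so the update must be mutually exclusive or increments can be lost.

    // Two tasks compete for a shared counter; the lock ensures that
    // only one task at a time performs the read-increment-write.
    public class CompetitionDemo {
        private static int counter = 0;
        private static final Object lock = new Object();

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    synchronized (lock) {   // only one task may update at a time
                        counter++;
                    }
                }
            };
            Thread a = new Thread(work), b = new Thread(work);
            a.start(); b.start();
            a.join();  b.join();
            // Always 200000 with the lock; unpredictable without it.
            System.out.println(counter);
        }
    }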

Task Execution States

A task can be in one of five execution states:

- New: the task has been created but has not yet begun its execution.
- Ready: the task is ready to run but is not currently running, either because it has not been given processor time by the scheduler or because it was previously blocked and has since been unblocked.
- Running: the task is currently executing; that is, it has a processor and its code is being executed.
- Blocked: the task has been running, but its execution was interrupted by one of several events, the most common of which is an input or output operation.
- Dead: the task is no longer active in any sense.
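
For a concrete point of reference, Java's Thread.State values map only roughly onto these five conceptual states (this mapping is an approximation, not part of the original text): new corresponds to NEW, ready and running are both covered by RUNNABLE, blocked by BLOCKED, WAITING, or TIMED_WAITING, and dead by TERMINATED.

    // Observing a task's state at three points in its lifetime.
    public class StateDemo {
        public static void main(String[] args) throws InterruptedException {
            Thread t = new Thread(() -> { /* finishes immediately */ });
            System.out.println(t.getState());   // NEW: created, not yet started
            t.start();
            System.out.println(t.getState());   // typically RUNNABLE: ready or running
            t.join();
            System.out.println(t.getState());   // TERMINATED: dead
        }
    }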

Liveness and Deadlock

Associated with the concurrent execution of tasks and the use of shared resources is the concept of liveness. In the environment of sequential programs, a program has the liveness characteristic if it continues to execute, eventually leading to completion. In more general terms, liveness means that if some event—say, program completion—is supposed to occur, it will occur, eventually. That is, progress is continually made. In a concurrent environment and with shared resources, the liveness of a task can cease to exist, meaning that the program cannot continue and thus will never terminate.

Consider the case in which task A holds resource X and needs resource Y, while task B holds Y and needs X. Neither relinquishes the resource it possesses, and as a result, both lose their liveness, guaranteeing that execution of the program will never complete normally. This particular kind of loss of liveness is called deadlock. Deadlock is a serious threat to the reliability of a program, and therefore its avoidance demands serious consideration in both language and program design.
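
A minimal Java sketch of such a deadlock (an illustrative assumption, not from the original text): each task locks one resource and then waits forever for the resource held by the other.

    // Classic two-resource deadlock: the locks are acquired in
    // opposite orders, so each task waits forever on the other.
    public class DeadlockDemo {
        private static final Object resourceX = new Object();
        private static final Object resourceY = new Object();

        public static void main(String[] args) {
            new Thread(() -> {
                synchronized (resourceX) {
                    pause();                       // give B time to grab Y
                    synchronized (resourceY) { }   // waits forever: B holds Y
                }
            }).start();
            new Thread(() -> {
                synchronized (resourceY) {
                    pause();                       // give A time to grab X
                    synchronized (resourceX) { }   // waits forever: A holds X
                }
            }).start();
        }

        private static void pause() {
            try { Thread.sleep(100); } catch (InterruptedException e) { }
        }
    }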

Methods of Providing Synchronization

There are three main methods of providing synchronization: semaphores, monitors, and message passing.

Semaphores

To provide limited access to a data structure, guards can be placed around the code that accesses the structure. A guard is a linguistic device that allows the guarded code to be executed only when a specified condition is true. So, a guard can be used to allow only one task to access a shared data structure at a time. A semaphore is an implementation of a guard. Specifically, a semaphore is a data structure that consists of an integer and a queue that stores task descriptors. A task descriptor is a data structure that stores all of the relevant information about the execution state of a task.
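
As a sketch, assuming Java and its java.util.concurrent.Semaphore class: a binary semaphore (one permit) acts as the guard, so only one task at a time can execute the guarded code.

    import java.util.concurrent.Semaphore;

    // A binary semaphore used as a guard around a shared counter.
    public class SemaphoreDemo {
        private static final Semaphore guard = new Semaphore(1); // one permit
        private static int shared = 0;

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    try {
                        guard.acquire();      // wait operation: take the permit or block
                        try {
                            shared++;         // guarded section
                        } finally {
                            guard.release();  // release operation: return the permit
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            };
            Thread a = new Thread(work), b = new Thread(work);
            a.start(); b.start();
            a.join();  b.join();
            System.out.println(shared);       // 200000
        }
    }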

Monitors

One solution to some of the problems of semaphores in a concurrent environment is to encapsulate shared data structures with their operations and hide their representations—that is, to make shared data structures abstract data types with some special restrictions. This solution can provide competition synchronization without semaphores by transferring responsibility for synchronization to the run-time system. The first programming language to incorporate monitors was Concurrent Pascal (Brinch Hansen, 1975). Modula (Wirth, 1977), CSP/k (Holt et al., 1978), and Mesa (Mitchell et al., 1979) also provide monitors.
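
A minimal monitor-style sketch, assuming Java: synchronized methods hide the shared data and let the run-time system provide competition synchronization, while wait and notifyAll provide cooperation synchronization.

    // A one-slot buffer written in monitor style: all access to the
    // hidden data goes through synchronized methods.
    public class OneSlotBuffer {
        private int value;
        private boolean full = false;

        public synchronized void deposit(int v) throws InterruptedException {
            while (full) wait();    // cooperation: wait until the slot is empty
            value = v;
            full = true;
            notifyAll();            // wake any task waiting to fetch
        }

        public synchronized int fetch() throws InterruptedException {
            while (!full) wait();   // cooperation: wait until the slot is full
            full = false;
            notifyAll();            // wake any task waiting to deposit
            return value;
        }
    }

A producer task calls deposit and a consumer task calls fetch; neither needs any explicit guard code of its own, because the monitor enforces mutual exclusion.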

Message Passing

The first efforts to design languages that provide the capability for message passing among concurrent tasks were those of Brinch Hansen (1978) and Hoare (1978). These pioneer developers of message passing also developed a technique for handling the problem of what to do when multiple simultaneous requests were made by other tasks to communicate with a given task. It was decided that some form of nondeterminism was required to provide fairness in choosing which among those requests would be taken first. This fairness can be defined in various ways, but in general, it means that all requesters are provided an equal chance of communicating with a given task (assuming that every requester has the same priority). Nondeterministic constructs for statement-level control, called guarded commands, were introduced by Dijkstra (1975).
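
A minimal sketch in a similar spirit, assuming Java (this is a shared-mailbox illustration, not Hoare's rendezvous): two sender tasks compete to communicate with one receiver, and the queue's fairness option gives blocked senders an equal, first-come-first-served chance.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Message passing through a one-slot mailbox; the fairness flag
    // makes blocked senders proceed in arrival order.
    public class MessageDemo {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(1, true);

            for (String name : new String[] { "A", "B" }) {
                new Thread(() -> {
                    try {
                        mailbox.put("request from " + name); // send: blocks until accepted
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }).start();
            }

            // The receiver accepts the two messages, one at a time.
            System.out.println(mailbox.take());
            System.out.println(mailbox.take());
        }
    }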
