
Session XIII : Logic Programming Language

In this Session XIII : Logic Programming Language, there are 7 subtopics:

  • Introduction
  • A Brief Introduction to Predicate Calculus
  • An Overview of Logic Programming
  • The Origins of Prolog
  • The Basic Elements of Prolog
  • Deficiencies of Prolog
  • Applications of Logic Programming

Introduction

Programming that uses a form of symbolic logic as a programming language is often called logic programming, and languages based on symbolic logic are called logic programming languages, or declarative languages. We have chosen to describe the logic programming language Prolog, because it is the only widely used logic language. The syntax of logic programming languages is remarkably different from that of the imperative and functional languages. The semantics of logic programs also bears little resemblance to that of imperative-language programs. These observations should lead the reader to some curiosity about the nature of logic programming and declarative languages.

 

A Brief Introduction to Predicate Calculus

A proposition can be thought of as a logical statement that may or may not be true. It consists of objects and the relationships among objects. Formal logic was developed to provide a method for describing propositions, with the goal of allowing those formally stated propositions to be checked for validity.

Symbolic logic can be used for the three basic needs of formal logic: to express propositions, to express the relationships between propositions, and to describe how new propositions can be inferred from other propositions that are assumed to be true. The particular form of symbolic logic used for logic programming is called predicate calculus.

Propositions

The objects in logic programming propositions are represented by simple terms, which are either constants or variables. A constant is a symbol that represents an object. A variable is a symbol that can represent different objects at different times, although in a sense far closer to the variables of mathematics than to those of an imperative programming language. The simplest propositions, which are called atomic propositions, consist of compound terms. A compound term is one element of a mathematical relation, written in a form that has the appearance of mathematical function notation.

Clausal Form

One problem with predicate calculus as we have described it thus far is that there are too many different ways of stating propositions that have the same meaning; that is, there is a great deal of redundancy. This is not such a problem for logicians, but if predicate calculus is to be used in an automated (computerized) system, it is a serious problem. To simplify matters, a standard form for propositions is desirable. Clausal form, which is a relatively simple form of propositions, is one such standard form. All propositions can be expressed in clausal form.

An Overview of Logic Programming

Languages used for logic programming are called declarative languages, because programs written in them consist of declarations rather than assignments and control flow statements. These declarations are actually statements, or propositions, in symbolic logic. One of the essential characteristics of logic programming languages is their semantics, which is called declarative semantics. The basic concept of this semantics is that there is a simple way to determine the meaning of each statement, and it does not depend on how the statement might be used to solve a problem. Declarative semantics is considerably simpler than the semantics of the imperative languages.

Programming in a logic programming language is nonprocedural. Programs in such languages do not state exactly how a result is to be computed but rather describe the form of the result. The difference is that we assume the computer system can somehow determine how the result is to be computed. What is needed to provide this capability for logic programming languages is a concise means of supplying the computer with both the relevant information and a method of inference for computing desired results.

The Origins of Prolog

Alain Colmerauer and Philippe Roussel at the University of Aix-Marseille, with some assistance from Robert Kowalski at the University of Edinburgh, developed the fundamental design of Prolog. The collaboration between the University of Aix-Marseille and the University of Edinburgh continued until the mid-1970s. Since then, research on the development and use of the language has progressed independently at those two locations, resulting in, among other things, two syntactically different dialects of Prolog.

After a decade of effort, the Japanese Fifth-Generation Computer Systems (FGCS) project, which had adopted logic programming as its basis, was quietly dropped. Despite the great assumed potential of logic programming and Prolog, little of great significance had been discovered. This led to a decline in the interest in and use of Prolog, although it still has its applications and proponents.

The Basic Elements of Prolog

There are now a number of different dialects of Prolog. These can be grouped into several categories: those that grew from the Marseille group, those that came from the Edinburgh group, and some dialects that have been developed for microcomputers, such as micro-Prolog, which is described by Clark and McCabe.

Terms

A Prolog term is a constant, a variable, or a structure. A constant is either an atom or an integer. Atoms are the symbolic values of Prolog and are similar to their counterparts in LISP. In particular, an atom is either a string of letters, digits, and underscores that begins with a lowercase letter or a string of any printable ASCII characters delimited by apostrophes.

A variable is any string of letters, digits, and underscores that begins with an uppercase letter or an underscore ( _ ). Variables are not bound to types by declarations. The binding of a value, and thus a type, to a variable is called an instantiation. Instantiation occurs only in the resolution process. A variable that has not been assigned a value is called uninstantiated. Instantiations last only as long as it takes to satisfy one complete goal, which involves the proof or disproof of one proposition. Prolog variables are only distant relatives, in terms of both semantics and use, to the variables in the imperative languages. The last kind of term is called a structure. Structures represent the atomic propositions of predicate calculus, and their general form is the same.

Fact Statements

Our discussion of Prolog statements begins with those statements used to construct the hypotheses, or database of assumed information—the statements from which new information can be inferred.
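For example, the following fact statements unconditionally assert propositions that the system takes to be true (the names are illustrative):

  female(shelley).
  male(bill).
  father(bill, jake).
  mother(mary, shelley).

The first states that shelley is female; the last states that mary is the mother of shelley.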

Rule Statements

Rule Statements are also used for hypotheses. The other basic form of Prolog statement for constructing the database corresponds to a headed Horn clause. This form can be related to a known theorem in mathematics from which a conclusion can be drawn if the set of given conditions is satisfied.

The right side is the antecedent, or if part, and the left side is the consequent, or then part. If the antecedent of a Prolog statement is true, then the consequent of the statement must also be true. Because they are Horn clauses, the consequent of a Prolog statement is a single term, while the antecedent can be either a single term or a conjunction.
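A sketch of rule statements, reusing the illustrative facts above (the :- symbol separates the consequent from the antecedent):

  parent(X, Y) :- mother(X, Y).
  parent(X, Y) :- father(X, Y).
  grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

The last rule reads: X is a grandparent of Z if X is a parent of some Y and Y is a parent of Z; the comma expresses conjunction.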

Goal Statements

In Prolog, these propositions are called goals, or queries. The syntactic form of Prolog goal statements is identical to that of headless Horn clauses. Goal statements are used for theorem proving: the theorem is stated as a proposition that we want the system to prove or disprove, and that proposition is the goal statement.
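Continuing the illustrative database above, a goal may be a ground proposition to be proved or disproved, or it may contain a variable for the system to instantiate (using the conventional ?- query prompt):

  ?- father(bill, jake).
  ?- father(X, jake).

The first query is answered yes or no; the second, if it succeeds, reports an instantiation of X that makes the proposition true, here X = bill.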

The Inferencing Process of Prolog

Queries are called goals. When a goal is a compound proposition, each of the facts (structures) is called a subgoal. To prove that a goal is true, the inferencing process must find a chain of inference rules and/or facts in the database that connect the goal to one or more facts in the database.

When a goal has more than one subgoal, the system can use either:

  • Depth-first search: find a complete proof for the first subgoal before working on others
  • Breadth-first search: work on all subgoals in parallel

Backtracking arises when a goal has multiple subgoals: if the system fails to show the truth of one of the subgoals, it reconsiders the previous subgoal and tries to find an alternative solution to it.

There are two opposite approaches to attempting to match a given goal to a fact in the database. The system can begin with the facts and rules of the database and attempt to find a sequence of matches that lead to the goal. This approach is called bottom-up resolution, or forward chaining. The alternative is to begin with the goal and attempt to find a sequence of matching propositions that lead to some set of original facts in the database. This approach is called top-down resolution, or backward chaining.

Simple Arithmetic

Prolog supports integer variables and integer arithmetic. The is operator takes an arithmetic expression as its right operand and a variable as its left operand. Example:
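A minimal sketch of the is operator, using illustrative relations and values:

  speed(ford, 100).
  time(ford, 20).
  distance(X, Y) :- speed(X, Speed),
                    time(X, Time),
                    Y is Speed * Time.

The goal ?- distance(ford, D). instantiates Speed to 100 and Time to 20, evaluates Speed * Time, and instantiates D to 2000.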

It is instructive to take an operational look at how a Prolog system produces results. Prolog has a built-in structure named trace that displays the instantiations of values to variables at each step during the attempt to satisfy a given goal. trace is used to understand and debug Prolog programs.

List Structures

A list is a sequence of any number of elements, where the elements can be atoms, atomic propositions, or other terms (including other lists).
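For example, the following are all legal Prolog lists; the [X | Y] notation denotes a list whose head is X and whose tail is the list Y:

  [apple, prune, grape, kumquat]
  []
  [apple | [prune, grape, kumquat]]

The empty list is written [], and the third list is equivalent to the first.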

Append example:
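The standard two-clause definition of append, which succeeds when its third argument is the concatenation of its first two list arguments:

  append([], List, List).
  append([Head | List_1], List_2, [Head | List_3]) :-
      append(List_1, List_2, List_3).

For example, the goal ?- append([bob, jo], [jake, darcie], Family). instantiates Family to [bob, jo, jake, darcie].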

Deficiencies of Prolog

Prolog, for reasons of efficiency, allows the user to control the ordering of pattern matching during resolution. In a pure logic programming environment, the order of attempted matches that take place during resolution is nondeterministic, and all matches could be attempted concurrently. However, because Prolog always matches in the same order, starting at the beginning of the database and at the left end of a given goal, the user can profoundly affect efficiency by ordering the database statements to optimize a particular application. For example, if the user knows that certain rules are much more likely to succeed than the others during a particular “execution,” then the program can be made more efficient by placing those rules first in the database.

Applications of Logic Programming

Relational database management systems

Relational database management systems (RDBMSs) store data in the form of tables. Queries on such databases are often stated in Structured Query Language (SQL). SQL is nonprocedural in the same sense that logic programming is nonprocedural. The user does not describe how to retrieve the answer; rather, he or she describes only the characteristics of the answer. The connection between logic programming and RDBMSs should be obvious. Simple tables of information can be described by Prolog structures, and relationships between tables can be conveniently and easily described by Prolog rules. The retrieval process is inherent in the resolution operation. The goal statements of Prolog provide the queries for the RDBMS. Logic programming is thus a natural match to the needs of implementing an RDBMS.

Expert systems

Expert systems are computer systems designed to emulate human expertise in some particular domain. They consist of a database of facts, an inferencing process, some heuristics about the domain, and some friendly human interface that makes the system appear much like an expert human consultant. In addition to their initial knowledge base, which is provided by a human expert, expert systems learn from the process of being used, so their databases must be capable of growing dynamically. Also, an expert system should include the capability of interrogating the user to get additional information when it determines that such information is needed.

Natural language processing

Certain kinds of natural-language processing can be done with logic programming. In particular, natural-language interfaces to computer software systems, such as intelligent databases and other intelligent knowledge-based systems, can be conveniently done with logic programming. For describing language syntax, forms of logic programming have been found to be equivalent to context-free grammars. Proof procedures in logic programming systems have been found to be equivalent to certain parsing strategies.


Session XII : Functional Programming Language

In this Session XII : Functional Programming Language, there are 7 subtopics:

  • Introduction
  • Mathematical Functions
  • Fundamentals of Functional Programming Languages
  • The First Functional Programming Language: LISP
  • Introduction to Scheme
  • Common LISP
  • Comparison of Functional and Imperative Languages

Introduction

The design of the imperative languages is based directly on the von Neumann architecture  as discussed in Chapter 1. Imperative languages can be thought of collectively as a progression of developments to improve the basic model, which was Fortran I. All have been designed to make efficient use of von Neumann architecture computers. Although the imperative style of programming has been found acceptable by most programmers, its heavy reliance on the underlying architecture is thought by some to be an unnecessary restriction on the alternative approaches to software development. Other bases for language design exist, some of them oriented more to particular programming paradigms or methodologies than to efficient execution on a particular computer architecture. Thus far, however, only a relatively small minority of programs have been written in nonimperative languages. The functional programming paradigm, which is based on mathematical functions, is the design basis of the most important nonimperative styles of languages. This style of programming is supported by functional programming languages.

Mathematical Functions

A mathematical function is a mapping of members of one set, called the domain set, to another set, called the range set. A function definition specifies the domain and range sets, either explicitly or implicitly, along with the mapping. The mapping is described by an expression or, in some cases, by a table. Functions are often applied to a particular element of the domain set, given as a parameter to the function.

Lambda Expression

A lambda expression specifies the parameters and the mapping of a function; the expression is the function itself, which is nameless.

Lambda expressions describe nameless functions and are applied to their parameter(s) by placing the parameter(s) after the expression.
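For example, the nameless function that cubes its argument can be written as the lambda expression λ(x) x * x * x, and the application (λ(x) x * x * x)(2) evaluates to 8.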

Functional Forms

A higher-order function, or functional form, is one that either takes one or more functions as parameters or yields a function as its result, or both. One common kind of functional form is function composition, which has two functional parameters and yields a function whose value is the first actual parameter function applied to the result of the second.

Function Composition

Function Composition is a functional form that takes two functions as parameters and yields a function whose value is the first actual parameter function applied to the application of the second.
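For example, if f(x) ≡ x + 2 and g(x) ≡ 3 * x, then the composition h ≡ f ∘ g is the function h(x) ≡ f(g(x)) ≡ (3 * x) + 2.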

Apply-to-all

Apply-to-all is a functional form that takes a single function as a parameter and yields a list of values obtained by applying the given function to each element of a list of parameters.
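For example, for h(x) ≡ x * x, the apply-to-all form α gives α(h, (2, 3, 4)) ≡ (4, 9, 16).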

Fundamentals of Functional Programming Languages

The objective of the design of a functional programming language is to mimic mathematical functions to the greatest extent possible. This results in an approach to problem solving that is fundamentally different from approaches used with imperative languages. In an imperative language, an expression is evaluated and the result is stored in a memory location, which is represented as a variable in a program. This is the purpose of assignment statements. This necessary attention to memory cells, whose values represent the state of the program, results in a relatively low-level programming methodology.

A purely functional programming language does not use variables or assignment statements, thus freeing the programmer from concerns related to the memory cells, or state, of the program. Without variables, iterative constructs are not possible, for they are controlled by variables. Repetition must be specified with recursion rather than with iteration. Programs are function definitions and function application specifications, and executions consist of evaluating function applications. Without variables, the execution of a purely functional program has no state in the sense of operational and denotational semantics. The execution of a function always produces the same result when given the same parameters. This feature is called referential transparency. It makes the semantics of purely functional languages far simpler than the semantics of the imperative languages (and the functional languages that include imperative features). It also makes testing easier, because each function can be tested separately, without any concern for its context.

The first functional programming language, LISP, uses a syntactic form, for both data and code, that is very different from that of the imperative languages. However, many functional languages designed later use syntax for their code that is similar to that of the imperative languages.

The First Functional Programming Language: LISP

There were only two categories of data objects in the original LISP: atoms and lists. List elements are pairs, where the first part is the data of the element, which is a pointer to either an atom or a nested list. The second part of a pair can be a pointer to an atom, a pointer to another element, or the empty list. Elements are linked together in lists with the second parts. Atoms and lists are not types in the sense that imperative languages have types. In fact, the original LISP was a typeless language. Atoms are either symbols, in the form of identifiers, or numeric literals. Lambda notation is used to specify functions and function definitions. Function applications and data have the same form. The first LISP interpreter appeared only as a demonstration of the universality of the computational capabilities of the notation.

Introduction to Scheme

The Scheme language, which is a dialect of LISP, was developed at MIT. It is characterized by its small size, its exclusive use of static scoping, and its treatment of functions as first-class entities. As first-class entities, Scheme functions can be the values of expressions, elements of lists, passed as parameters, and returned from functions.

A Scheme interpreter in interactive mode is an infinite read-evaluate-print loop (often abbreviated as REPL). It repeatedly reads an expression typed by the user (in the form of a list), interprets the expression, and displays the resulting value. This form of interpreter is also used by Ruby and Python. Expressions are interpreted by the function EVAL. Literals evaluate to themselves. So, if you type a number to the interpreter, it simply displays the number. A Scheme program is a collection of function definitions. Consequently, knowing how to define these functions is a prerequisite to writing the simplest program. In Scheme, a nameless function actually includes the word LAMBDA, and is called a lambda expression.
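A minimal sketch of both forms of function definition in the interactive interpreter (the function name is illustrative):

  (define (square x) (* x x))
  (square 5)                      ; the interpreter displays 25

  ((lambda (x) (* x x)) 5)        ; a nameless function applied directly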

Scheme includes a few simple output functions, but when used with the interactive interpreter, most output from Scheme programs is the normal output from the interpreter, displaying the results of applying EVAL to top-level functions. Scheme includes a formatted output function, PRINTF, which is similar to the printf function of C.

A predicate function is one that returns a Boolean value (some representation of either true or false). Scheme includes a collection of predicate functions for numeric data. When a list is interpreted as a Boolean, any nonempty list evaluates to true; the empty list evaluates to false. This is similar to the interpretation of integers in C as Boolean values; zero evaluates to false and any nonzero value evaluates to true. Scheme uses three different constructs for control flow: one similar to the selection construct of the imperative languages and two based on the evaluation control used in mathematical functions.

Scheme programs are interpreted by the function application function, EVAL. When applied to a primitive function, EVAL first evaluates the parameters of the given function. This action is necessary when the actual parameters in a function call are themselves function calls, which is frequently the case. In some calls, however, the parameters are data elements rather than function references. When a parameter is not a function reference, it obviously should not be evaluated.
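QUOTE, usually abbreviated with the apostrophe shorthand, marks a parameter as data so that EVAL does not evaluate it:

  (car (quote (a b c)))   ; evaluates to a
  (car '(a b c))          ; the same call, using the shorthand

Without the quote, (a b c) would itself be evaluated as a call to a function named a, which is not what is intended here.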

Common LISP

Common LISP was created in an effort to combine the features of several early 1980s dialects of LISP, including Scheme, into a single language. Being something of a union of languages, it is quite large and complex, similar in these regards to C++ and C#. Its basis, however, is the original LISP, so its syntax, primitive functions, and fundamental nature come from that language.

The list of features of Common LISP is long: a large number of data types and structures, including records, arrays, complex numbers, and character strings; powerful input and output operations; and a form of packages for modularizing collections of functions and data, and also for providing access control. Common LISP includes several imperative constructs, as well as some mutable types. Recognizing the occasional flexibility provided by dynamic scoping, as well as the simplicity of static scoping, Common LISP allows both. The default scoping for variables is static, but by declaring a variable to be “special,” that variable becomes dynamically scoped. Macros are often used in Common LISP to extend the language. In fact, some of the predefined functions are actually macros. For example, DOLIST, which takes two parameters, a variable and a list, is a macro.
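A minimal sketch of DOLIST (the variable and list are illustrative):

  (dolist (n '(10 20 30))
    (print n))              ; prints 10, 20, and 30 in turn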

LISP implementations have a front end called the reader that transforms the text of LISP programs into a code representation. Then, the macro calls in the code representation are expanded into code representations. The output of this step is then either interpreted or compiled into the machine language of the host computer, or perhaps into an intermediate code that can be interpreted. There is a special kind of macro, named reader macros or read macros, that are expanded during the reader phase of a LISP language processor. A reader macro expands a specific character into a string of LISP code. For example, the apostrophe in LISP is a read macro that expands to a call to QUOTE. Users can define their own reader macros to create other shorthand constructs.

Comparison of Functional and Imperative Languages

Some in the functional programming community have claimed that the use of functional programming results in an order-of-magnitude increase in productivity, largely due to functional programs being claimed to be only 10 percent as large as their imperative counterparts. While such numbers have actually been shown for certain problem areas, for other problem areas, functional programs are more like 25 percent as large as imperative solutions to the same problems. These factors allow proponents of functional programming to claim productivity advantages over imperative programming of 4 to 10 times. However, program size alone is not necessarily a good measure of productivity. Certainly not all lines of source code have equal complexity, nor do they take the same amount of time to produce. In fact, because of the necessity of dealing with variables, imperative programs have many trivially simple lines for initializing and making small changes to variables.

Execution efficiency is another basis for comparison. When functional programs are interpreted, they are of course much slower than their compiled imperative counterparts. However, there are now compilers for most functional languages, so that execution speed disparities between functional languages and compiled imperative languages are no longer so great. One might be tempted to say that because functional programs are significantly smaller than equivalent imperative programs, they should execute much faster than the imperative programs. However, this often is not the case, because of a collection of language characteristics of the functional languages, such as lazy evaluation, that have a negative impact on execution efficiency. Considering the relative efficiency of functional and imperative programs, it is reasonable to estimate that an average functional program will execute in about twice the time of its imperative counterpart.

One simple factor that strongly affects the complexity of imperative, or procedural, programming is the necessary attention of the programmer to the state of the program at each step of its development. In a large program, the state of the program is a large number of values (for the large number of program variables). In pure functional programming, there is no state; hence, no need to devote attention to keeping it in mind. It is not a simple matter to determine precisely why functional languages have not attained greater popularity. The inefficiency of the early implementations was clearly a factor then, and it is likely that at least some contemporary imperative programmers still believe that programs written in functional languages are slow. In addition, the vast majority of programmers learn programming using imperative languages, which makes functional programs appear to them to be strange and difficult to understand.


Session XI : Exception Handling and Event Handling

In this Session XI : Exception Handling and Event Handling there are 5 subtopics:

  • Introduction to Exception Handling
  • Exception Handling in C++
  • Introduction to Event Handling
  • Event Handling with Java
  • Event Handling in C#

Introduction to Exception Handling

Most computer hardware systems are capable of detecting certain run-time error conditions, such as floating-point overflow. Early programming languages were designed and implemented in such a way that the user program could neither detect nor attempt to deal with such errors. In these languages, the occurrence of such an error simply causes the program to be terminated and control to be transferred to the operating system. The typical operating system reaction to a run-time error is to display a diagnostic message, which may be meaningful and therefore useful, or highly cryptic. After displaying the message, the program is terminated. In a language with exception handling, programs are allowed to trap some exceptions, thereby providing the possibility of fixing the problem and continuing.

An exception is any unusual event, erroneous or not, that is detectable by either hardware or software and that may require special processing. The special processing that may be required when an exception is detected is called exception handling. This processing is done by a code unit or segment called an exception handler. An exception is raised when its associated event occurs. A language that does not have exception handling capabilities can still define, detect, raise, and handle exceptions (user defined and software detected).

Advantages of Built-in Exception Handling:

  • Error detection code is tedious to write and it clutters the program
  • Exception handling encourages programmers to consider many different possible errors
  • Exception propagation allows a high level of reuse of exception handling code

Exception Handling Control Flow

Exception Handling in C++

The exception handling of C++ was accepted by the ANSI C++ standardization committee in 1990 and subsequently found its way into C++ implementations. The design is based in part on the exception handling of CLU, Ada, and ML.

C++ uses a special construct that is introduced with the reserved word try for this purpose. A try construct includes a compound statement called the try clause and a list of exception handlers. The compound statement defines the scope of the following handlers.

 

Each catch function is an exception handler. A catch function can have only a single formal parameter, which is similar to a formal parameter in a function definition in C++, including the possibility of it being an ellipsis (. . .). A handler with an ellipsis formal parameter is the catch-all handler; it is enacted for any raised exception if no appropriate handler was found.
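A minimal, self-contained sketch of the construct; the quotient function and its zero check are illustrative:

  #include <iostream>
  #include <stdexcept>

  int quotient(int total, int count) {
      if (count == 0)
          throw std::runtime_error("count is zero");  // raise an exception
      return total / count;
  }

  int main() {
      try {                                   // try clause: scope of the handlers
          std::cout << quotient(10, 0) << '\n';
      }
      catch (const std::runtime_error &e) {   // handler selected by parameter type
          std::cerr << "error: " << e.what() << '\n';
      }
      catch (...) {                           // catch-all handler (ellipsis)
          std::cerr << "unknown exception\n";
      }
  }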

The formal parameter need not include a variable name; it can be simply a type name, which serves to distinguish the handler from the others. The formal parameter can also be used to transfer information to the handler, or it can be an ellipsis, in which case the handler handles all exceptions not yet handled.

The C++ design is quite different: There are no predefined hardware-detectable exceptions that can be handled by the user, and exceptions are not named. Exceptions are connected to handlers through a parameter type in which the formal parameter may be omitted. The type of the formal parameter of a handler determines the condition under which it is called but may have nothing whatsoever to do with the nature of the raised exception. Therefore, the use of predefined types for exceptions certainly does not promote readability. It is much better to define classes for exceptions with meaningful names in a meaningful hierarchy that can be used for defining exceptions. The exception parameter provides a way to pass information about an exception to the exception handler.

Introduction to Event Handling

Event handling is similar to exception handling. In both cases, the handlers are implicitly called by the occurrence of something, either an exception or an event. While exceptions can be created either explicitly by user code or implicitly by hardware or a software interpreter, events are created by external actions, such as user interactions through a graphical user interface (GUI). In this section, the fundamentals of event handling, which are substantially less complex than those of exception handling, are introduced. An event is a notification that something specific has occurred, such as a mouse click on a graphical button. The event handler is a segment of code that is executed in response to an event.

Event Handling with Java

Java Swing GUI Components

A text box is an object of class JTextField, and a radio button is an object of class JRadioButton. An applet's display is a frame, a multilayered structure. The content pane is one layer, where applets put their output. GUI components can be placed in a frame. Layout manager objects are used to control the placement of components.

The Java Event Model

When a user interacts with a GUI component, for example by clicking a button, the component creates an event object and calls an event handler through an object called an event listener, passing the event object. The event handler provides the associated actions. GUI components are event generators; they generate events. In Java, events are connected to event handlers through event listeners. Event listeners are connected to event generators through event listener registration. Listener registration is done with a method of the class that implements the listener interface, as described later in this section. Only event listeners that are registered for a specific event are notified when that event occurs.

One class of events is ItemEvent, which is associated with the event of clicking a checkbox, a radio button, or a list item. The ItemListener interface prescribes a method, itemStateChanged, which is the handler for ItemEvent events; the listener is registered with addItemListener.
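A minimal sketch of the pattern; the class and component names are illustrative:

  import java.awt.event.ItemEvent;
  import java.awt.event.ItemListener;
  import javax.swing.JCheckBox;

  class BoldToggle implements ItemListener {
      private JCheckBox bold = new JCheckBox("Bold");

      BoldToggle() {
          bold.addItemListener(this);   // event listener registration
      }

      public void itemStateChanged(ItemEvent e) {
          // the handler, called each time the checkbox is clicked
          System.out.println("state: " + e.getStateChange());
      }
  }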

Event Handling in C#

Event handling in C# (and in the other .NET languages) is similar to that of Java. .NET provides two approaches to creating GUIs in applications, the original Windows Forms and the more recent Windows Presentation Foundation.

Using Windows Forms, a C# application that constructs a GUI is created by subclassing the Form predefined class, which is defined in the System.Windows.Forms namespace. This class implicitly provides a window to contain our components. There is no need to build frames or panels explicitly. Text can be placed in a Label object and radio buttons are objects of the RadioButton class. The size of a Label object is not explicitly specified in the constructor; rather it can be specified by setting the AutoSize data member of the Label object to true, which sets the size according to what is placed in it. Components can be placed at a particular location in the window by assigning a new Point object to the Location property of the component. The Point class is defined in the System.Drawing namespace. The Point constructor takes two parameters, which are the coordinates of the object in pixels.

An event handler can have any name; a radio button is tested with the Boolean Checked property of the button; and to register a handler, a new EventHandler object must be created and added to the predefined delegate for the event.

When a radio button changes from unchecked to checked, the CheckedChanged event is raised, and the associated delegate is referenced by the name of the event.
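A minimal Windows Forms sketch; the form, control, and handler names are illustrative:

  using System;
  using System.Drawing;
  using System.Windows.Forms;

  class FontForm : Form {
      private RadioButton plain = new RadioButton();

      public FontForm() {
          plain.Text = "Plain";
          plain.Location = new Point(10, 10);
          Controls.Add(plain);
          // register the handler on the event's predefined delegate
          plain.CheckedChanged += new EventHandler(Rb_CheckedChanged);
      }

      private void Rb_CheckedChanged(object sender, EventArgs e) {
          if (plain.Checked)
              Text = "Plain selected";   // test the button, then react
      }
  }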

 


Session X : Concurrency

In this Session X : Concurrency, there are 4 subtopics:

  • Introduction
  • Introduction to Subprogram-Level Concurrency
  • Semaphores
  • Monitors

Introduction

Concurrency in software execution can occur at four different levels: instruction level (executing two or more machine instructions simultaneously), statement level (executing two or more high-level language statements simultaneously), unit level (executing two or more subprogram units simultaneously), and program level (executing two or more programs simultaneously).

 

Introduction to Subprogram-Level Concurrency

A task is a unit of a program, similar to a subprogram, that can be in concurrent execution with other units of the same program. Each task in a program can support one thread of control. Tasks are sometimes called processes. In some languages, for example Java and C#, certain methods serve as tasks. Such methods are executed in objects called threads. Tasks usually work together and may be started implicitly. When a program unit starts the execution of a task, it is not necessarily suspended, and when a task's execution is completed, control may not return to the caller.

There are two general categories of tasks:

  • Heavyweight tasks execute in their own address space
  • Lightweight tasks all run in the same address space – more efficient

A task is disjoint if it does not communicate with or affect the execution of any other task in the program in any way.

Synchronization is a mechanism that controls the order in which tasks execute. Two kinds of synchronization are required when tasks share data: cooperation and competition. Cooperation synchronization is required between task A and task B when task A must wait for task B to complete some specific activity before task A can begin or continue its execution. Competition synchronization is required between two tasks when both require the use of some resource that cannot be simultaneously used.

A run-time system program called a scheduler manages the sharing of processors among the tasks. If there were never any interruptions and tasks all had the same priority, the scheduler could simply give each task a time slice, such as 0.1 second, and when a task's turn came, let it execute on a processor for that amount of time.

Task Execution States :

  1. New – created but not yet started
  2. Ready – ready to run but not currently running (no available processor)
  3. Running
  4. Blocked – has been running, but cannot now continue (usually waiting for some event to occur)
  5. Dead – no longer active in any sense

Semaphores

A semaphore is a simple mechanism that can be used to provide synchronization of tasks. Although semaphores are an early approach to providing synchronization, they are still used, both in contemporary languages and in library-based concurrency support systems. Semaphores can be used to implement guards on the code that accesses shared data structures and to provide both competition and cooperation synchronization. Semaphores have only two operations, wait and release.

Cooperation Synchronization with Semaphores

For Example: A shared buffer

The buffer is implemented as an ADT with the operations DEPOSIT and FETCH as the only ways to access it, and it uses two semaphores for cooperation: emptyspots and fullspots. The semaphore counters are used to store the numbers of empty spots and full spots in the buffer. DEPOSIT must first check emptyspots to see if there is room in the buffer. If there is room, the counter of emptyspots is decremented and the value is inserted; if there is no room, the caller is stored in the queue of emptyspots.

When DEPOSIT is finished, it must increment the counter of fullspots. FETCH must first check fullspots to see if there is a value. If there is a full spot, the counter of fullspots is decremented and the value is removed. If there are no values in the buffer, the caller must be placed in the queue of fullspots. When FETCH is finished, it increments the counter of emptyspots.

The operations of FETCH and DEPOSIT on the semaphores are accomplished through two semaphore operations named wait and release

Competition Synchronization

This semaphore need not count anything but can simply indicate with its counter whether the buffer is currently being used. The wait statement allows the access only if the semaphore’s counter has the value 1, which indicates that the shared buffer is not currently being accessed. If the semaphore’s counter has a value of 0, there is a current access taking place, and the task is placed in the queue of the semaphore. Notice that the semaphore’s counter must be initialized to 1. The queues of semaphores must always be initialized to empty before use of the queue can begin. A semaphore that requires only a binary-valued counter, like the one used to provide competition synchronization in the following example, is called a binary semaphore.
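A sketch of the shared-buffer example in C++20, assuming a simple queue as the buffer; guard plays the role of the binary semaphore, and acquire/release correspond to wait/release:

  #include <queue>
  #include <semaphore>

  constexpr int BUFLEN = 100;

  std::queue<int> buf;                                  // the shared buffer
  std::counting_semaphore<BUFLEN> emptyspots{BUFLEN};   // cooperation
  std::counting_semaphore<BUFLEN> fullspots{0};         // cooperation
  std::binary_semaphore guard{1};                       // competition

  void deposit(int value) {
      emptyspots.acquire();   // wait for an empty spot
      guard.acquire();        // wait for exclusive access to the buffer
      buf.push(value);
      guard.release();
      fullspots.release();    // one more full spot
  }

  int fetch() {
      fullspots.acquire();    // wait for a stored value
      guard.acquire();
      int value = buf.front();
      buf.pop();
      guard.release();
      emptyspots.release();   // one more empty spot
      return value;
  }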

Monitors

One solution to some of the problems of semaphores in a concurrent environment is to encapsulate shared data structures with their operations and hide their representations—that is, to make shared data structures abstract data types with some special restrictions. This solution can provide competition synchronization without semaphores by transferring responsibility for synchronization to the run-time system.

Competition Synchronization

One of the most important features of monitors is that shared data is resident in the monitor rather than in any of the client units. The programmer does not synchronize mutually exclusive access to shared data through the use of semaphores or other mechanisms. Because the access mechanisms are part of the monitor, implementation of a monitor can be made to guarantee synchronized access by allowing only one access at a time. Calls to monitor procedures are implicitly blocked and stored in a queue if the monitor is busy at the time of the call.

Cooperation Synchronization

Although mutually exclusive access to shared data is intrinsic with a monitor, cooperation between processes is still the task of the programmer. In particular, the programmer must guarantee that a shared buffer does not experience underflow or overflow. Different languages provide different ways of programming cooperation synchronization, all of which are related to semaphores.


Session IX : Object Oriented Programming

In this Session IX : Object Oriented Programming, there are 5 subtopics:

  • Introduction
  • Object-Oriented Programming
  • Design Issues for Object-Oriented Languages
  • Support for Object-Oriented Programming in C++
  • Implementation of Object-Oriented Constructs

Introduction

Languages that support object-oriented programming now are firmly entrenched in the mainstream. From COBOL to LISP, including virtually every language in between, dialects that support object-oriented programming have appeared. C++, Objective-C, and Ada 95 support procedural and data-oriented programming, in addition to object-oriented programming. Some newer languages do not support other paradigms but still use basic imperative structures (e.g., Java and C#), while some are pure OOP languages (e.g., Smalltalk and Ruby).

Object-Oriented Programming

There are three major language features:

–Abstract data types : already discussed in session VIII

–Inheritance

–Polymorphism

Inheritance

Inheritance is the central theme in OOP. Inheritance offers a solution to both the modification problem posed by abstract data type reuse and the program organization problem. If a new abstract data type can inherit the data and functionality of some existing type, and is also allowed to modify some of those entities and add new entities, reuse is greatly facilitated without requiring changes to the reused abstract data type. Programmers can begin with an existing abstract data type and design a modified descendant of it to fit a new problem requirement. Furthermore, inheritance provides a framework for the definition of hierarchies of related classes that can reflect the descendant relationships in the problem space.

  • Inheritance can be complicated by access controls to encapsulated entities
    • A class can hide entities from its subclasses
    • A class can hide entities from its clients
    • A class can also hide entities for its clients while allowing its subclasses to see them
  • Besides inheriting methods as is, a class can modify an inherited method
    • The new one overrides the inherited one
    • The method in the parent is overridden

There are 3 ways a class can differ from its parent:

  1. The parent class can define some of its variables or methods to have private access, which means they will not be visible in the subclass
  2. The subclass can add variables and/or methods to those inherited from the parent
  3. The subclass can modify the behavior of one or more of its inherited methods.

There are two kinds of variables in a class:

  • Class variables – one per class
  • Instance variables – one per object

There are two kinds of methods in a class:

  • Class methods – accept messages to the class
  • Instance methods – accept messages to objects

One disadvantage of inheritance for reuse is that it creates interdependencies among classes, which complicate maintenance.

Dynamic Binding (Polymorphism)

Polymorphism is a natural part of any object-oriented language that is statically typed. In a sense, polymorphism makes a statically typed language a little bit dynamically typed, where the little bit is in some bindings of method calls to methods. The type of a polymorphic variable is indeed dynamic. A polymorphic variable can be defined in a class that is able to reference (or point to) objects of the class and objects of any of its descendants. When a class hierarchy includes classes that override methods and such methods are called through a polymorphic variable, the binding to the correct method will be dynamic. One purpose of dynamic binding is to allow software systems to be more easily extended during both development and maintenance.

Design Issues for Object-Oriented Languages

  • The Exclusivity of Objects

A language designer who is totally committed to the object model of computation designs an object system that subsumes all other concepts of type. Everything, from a simple scalar integer to a complete software system, is an object in this mind-set. The advantage of this choice is the elegance and pure uniformity of the language and its use. The primary disadvantage is that simple operations must be done through the message-passing process, which often makes them slower than similar operations in an imperative model, where single machine instructions implement such simple operations.

  • Are Subclasses Subtypes?

The issue here is relatively simple: Does an “is-a” relationship hold between a derived class and its parent class? From a purely semantics point of view, if a derived class is a parent class, then objects of the derived class must expose all of the members that are exposed by objects of the parent class. At a less abstract level, an is-a relationship guarantees that in a client a variable of the derived class type could appear anywhere a variable of the parent class type was legal, without causing a type error. Moreover, the derived class objects should be behaviorally equivalent to the parent class objects.

  • Single and Multiple Inheritance

Another simple issue is: Does the language allow multiple inheritance (in addition to single inheritance)? Maybe it’s not so simple. The purpose of multiple inheritance is to allow a new class to inherit from two or more classes. Because multiple inheritance is sometimes highly useful, why would a language designer not include it? The reasons lie in two categories: complexity and efficiency. The additional complexity is illustrated by several problems. First, note that if a class has two unrelated parent classes and neither defines a name that is defined in the other, there is no problem.

  • Object Allocation and Deallocation

There are two design questions concerning the allocation and deallocation of objects. The first of these is the place from which objects are allocated. If they behave like the abstract data types, then perhaps they can be allocated from anywhere. This means they could be allocated from the run-time stack or explicitly created on the heap with an operator or function, such as new. If they are all heap dynamic, there is the advantage of having a uniform method of creation and access through pointer or reference variables. This design simplifies the assignment operation for objects, making it in all cases only a pointer or reference value change. It also allows references to objects to be implicitly dereferenced, simplifying the access syntax.

  • Dynamic and Static Binding

One design question is whether all bindings of messages to methods are dynamic. The alternative is to allow the user to specify whether a specific binding is to be dynamic or static. The advantage of static bindings is that they are faster.

  • Nested Classes

One of the primary motivations for nesting class definitions is information hiding. If a new class is needed by only one class, there is no reason to define it so it can be seen by other classes. In this situation, the new class can be nested inside the class that uses it. In some cases, the new class is nested inside a subprogram, rather than directly in another class. The class in which the new class is nested is called the nesting class. The most obvious design issues associated with class nesting are related to visibility.

  • Initialization of Objects

The initialization issue is whether and how objects are initialized to values when they are created. This is more complicated than may be first thought. The first question is whether objects must be initialized manually or through some implicit mechanism.

Support for Object-Oriented Programming in C++

To maintain backward compatibility with C, C++ retains the type system of C and adds classes to it. Therefore, C++ has both traditional imperative-language types and the class structure of an object-oriented language. It supports methods, as well as functions that are not related to specific classes. This makes it a hybrid language, supporting both procedural programming and object-oriented programming.

A C++ class can be derived from an existing class, which is then its parent, or base, class. Unlike Smalltalk and most other languages that support object-oriented programming, a C++ class can also be stand-alone, without a superclass. All C++ objects must be initialized before they are used. Therefore, all C++ classes include at least one constructor method that initializes the data members of the new object. Constructor methods are implicitly called when an object is created. If any of the data members are pointers to heap-allocated data, the constructor allocates that storage.

All of the member functions we have defined thus far are statically bound; that is, a call to one of them is statically bound to a function definition. A C++ object could be manipulated through a value variable, rather than a pointer or a reference. (Such an object would be static or stack dynamic.) However, in that case, the object’s type is known and static, so dynamic binding is not needed. On the other hand, a pointer variable that has the type of a base class can be used to point to any heap-dynamic objects of any class publicly derived from that base class, making it a polymorphic variable. Publicly derived subclasses are subtypes if none of the members of the base class are private. Privately derived subclasses are never subtypes. A pointer to a base class cannot be used to reference a method in a subclass that is not a subtype.
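A minimal sketch of such a polymorphic pointer; the class names are illustrative:

  #include <iostream>

  class Shape {
  public:
      virtual void draw() { std::cout << "some shape\n"; }  // dynamically bound
      virtual ~Shape() {}
  };

  class Circle : public Shape {
  public:
      void draw() override { std::cout << "a circle\n"; }   // overrides draw
  };

  int main() {
      Shape *ptr = new Circle();   // base-class pointer to a derived object
      ptr->draw();                 // dynamic binding: prints "a circle"
      delete ptr;
  }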

Implementation of Object-Oriented Constructs

There are at least two parts of language support for object-oriented programming that pose interesting questions for language implementers: storage structures for instance variables and the dynamic bindings of messages to methods.

  • Storage structures for instance variables: In C++, classes are defined as extensions of C’s record structures—structs. This similarity suggests a storage structure for the instance variables of class instances—that of a record. This form of this structure is called a class instance record (CIR). The structure of a CIR is static, so it is built at compile time and used as a template for the creation of the data of class instances. Every class has its own CIR. When a derivation takes place, the CIR for the subclass is a copy of that of the parent class, with entries for the new instance variables added at the end.
  • Dynamic binding of messages to methods: Methods in a class that are statically bound need not be involved in the CIR for the class. However, methods that will be dynamically bound must have entries in this structure. Such entries could simply have a pointer to the code of the method, which must be set at object creation time. Calls to a method could then be connected to the corresponding code through this pointer in the CIR. The drawback to this technique is that every instance would need to store pointers to all dynamically bound methods that could be called from the instance.

Session VIII: Abstract Data Type

In this Session VIII : Abstract Data Type, there are 6 subtopics:

  • The Concept of Abstraction
  • Introduction to Data Abstraction
  • Language Examples
  • Parameterized Abstract Data Types
  • Encapsulation Constructs
  • Naming Encapsulations

The Concept of Abstraction

An abstraction is a view or representation of an entity that includes only the most significant attributes. In a general sense, abstraction allows one to collect instances of entities into groups in which their common attributes need not be considered. In the world of programming languages, abstraction is a weapon against the complexity of programming; its purpose is to simplify the programming process. It is an effective weapon because it allows programmers to focus on essential attributes, while ignoring subordinate attributes. The two fundamental kinds of abstraction in contemporary programming languages are process abstraction and data abstraction.

Introduction to Data Abstraction

The evolution of data abstraction began in 1960 with the first version of COBOL, which included the record data structure. The C-based languages have structs, which are also records. An abstract data type is a data structure, in the form of a record, but which includes subprograms that manipulate its data.

Syntactically, an abstract data type is an enclosure that includes only the data representation of one specific data type and the subprograms that provide the operations for that type. Through access controls, unnecessary details of the type can be hidden from units outside the enclosure that use the type. Program units that use an abstract data type can declare variables of that type, even though the actual representation is hidden from them. An instance of an abstract data type is called an object.

Here are some advantages of ADT:

–Reliability–by hiding the data representations, user code cannot directly access objects of the type or depend on the representation, allowing the representation to be changed without affecting user code

–Reduces the range of code and variables of which the programmer must be aware

–Name conflicts are less likely

–Provides a method of program organization

–Aids modifiability (everything associated with a data structure is together)

–Separate compilation

Language Examples

ADT in Ada

Ada provides an encapsulation construct that can be used to define a single abstract data type, including the ability to hide its representation. Ada 83 was one of the first languages to offer full support for abstract data types.

ADT in C++

C++, which was first released in 1985, was created by adding features to C. The first important additions were those to support object-oriented programming. Because one of the primary components of object-oriented programming is abstract data types, C++ obviously is required to support them.

ADT in Java

Java support for abstract data types is similar to that of C++. There are, however, a few important differences. All objects are allocated from the heap and accessed through reference variables. Methods in Java must be defined completely in a class. A method body must appear with its corresponding method header. Therefore, a Java abstract data type is both declared and defined in a single syntactic unit. A Java compiler can inline any method that is not overridden. Definitions are hidden from clients by declaring them to be private.

ADT in C#

C#’s support for abstract data types is essentially that of C++ and Java: data and the methods that manipulate it are encapsulated in classes, with visibility controlled by access modifiers.

Parameterized Abstract Data Types

It is often convenient to be able to parameterize abstract data types. For example, we should be able to design a stack abstract data type that can store any scalar type elements rather than be required to write a separate stack abstraction for every different scalar type. Note that this is only an issue for static typed languages. In a dynamic typed language like Ruby, any stack implicitly can store any type elements.

Classes can be somewhat generic by writing parameterized constructor functions.

Example for Parameterized ADTs in C++
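A sketch of such a type, assuming a fixed-capacity, array-based design with minimal error handling:

  #include <iostream>

  template <typename Type, int size = 100>
  class Stack {
  private:
      Type list[size];
      int topSub;                         // subscript of the top element
  public:
      Stack() : topSub(-1) {}
      void push(Type value) {
          if (topSub == size - 1)
              std::cerr << "Error in push--stack is full\n";
          else
              list[++topSub] = value;
      }
      Type top() { return list[topSub]; }
      void pop() { if (topSub >= 0) topSub--; }
      bool empty() { return topSub == -1; }
  };

  int main() {
      Stack<int> stk;                     // instantiated for int elements
      stk.push(42);
      std::cout << stk.top() << '\n';     // prints 42
  }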

Encapsulation Constructs

When the size of a program reaches beyond a few thousand lines, two practical problems become evident. From the programmer’s point of view, having such a program appear as a single collection of subprograms or abstract data type definitions does not impose an adequate level of organization on the program to keep it intellectually manageable. The second practical problem for larger programs is recompilation. The obvious solution to these problems is to organize programs into collections of logically related code and data, each of which can be compiled without recompilation of the rest of the program. An encapsulation is such a collection. Encapsulations are often placed in libraries and made available for reuse in programs other than those for which they were written.

Naming Encapsulations

We have considered encapsulations to be syntactic containers for logically related software resources—in particular, abstract data types. The purpose of these encapsulations is to provide a way to organize programs into logical units for compilation. This allows parts of programs to be recompiled after isolated changes. There is another kind of encapsulation that is necessary for constructing large programs: a naming encapsulation. A naming encapsulation is used to create a new scope for names.

C++ Namespaces

  • Can place each library in its own namespace and qualify names used outside with the namespace, as sketched below
  • C# also includes namespaces
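A minimal sketch; the namespace and its contents are illustrative:

  #include <iostream>

  namespace myStackSpace {
      int topSub = -1;
      void push(int) { /* body elided */ }
  }

  int main() {
      myStackSpace::push(7);          // qualified reference from a client
      using myStackSpace::topSub;     // or import a single name
      std::cout << topSub << '\n';
  }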

Java Packages

  • Packages can contain more than one class definition; classes in a package are partial friends
  • Clients of a package can use fully qualified name or use the import declaration

Ruby Modules

  • Ruby classes are name encapsulations, but Ruby also has modules
  • Modules typically encapsulate collections of constants and methods
  • Modules cannot be instantiated or subclassed, and they cannot define variables
  • Methods defined in a module must include the module’s name
  • Access to the contents of a module is requested with the require method


Session VII: Subprogram

In this Session VII: Subprogram, there are 11 subtopics:

  • Introduction
  • Fundamentals of Subprograms
  • Local Referencing Environments
  • Parameter-Passing Methods
  • Parameters That Are Subprograms
  • Calling Subprograms Indirectly
  • Overloaded Subprograms
  • Generic Subprograms
  • User-Defined Overloaded Operators
  • Closures
  • Coroutines

Introduction

Two fundamental abstraction facilities can be included in a programming language: process abstraction and data abstraction. In the early history of high-level programming languages, only process abstraction was included. Process abstraction, in the form of subprograms, has been a central concept in all programming languages.

The first programmable computer, Babbage’s Analytical Engine, built in the 1840s, had the capability of reusing collections of instruction cards at several different places in a program. In a modern programming language, such a collection of statements is written as a subprogram. This reuse results in several different kinds of savings, primarily memory space and coding time. Such reuse is also an abstraction, for the details of the subprogram’s computation are replaced in a program by a statement that calls the subprogram. Instead of describing how some computation is to be done in a program, that description (the collection of statements in the subprogram) is enacted by a call statement, effectively abstracting away the details. This increases the readability of a program by emphasizing its logical structure while hiding the low-level details.

Fundamentals of Subprograms

Each subprogram has a single entry point. The calling program unit is suspended during the execution of the called subprogram, which implies that there is only one subprogram in execution at any given time. Control always returns to the caller when the subprogram’s execution terminates.

A subprogram definition describes the interface to and the actions of the subprogram abstraction. A subprogram call is the explicit request that a specific subprogram be executed. A subprogram is said to be active if, after having been called, it has begun execution but has not yet completed that execution. A subprogram header, which is the first part of the definition, serves several purposes. First, it specifies that the following syntactic unit is a subprogram definition of some particular kind. In languages that have more than one kind of subprogram, the kind of the subprogram is usually specified with a special word. Second, if the subprogram is not anonymous, the header provides a name for the subprogram. Third, it may optionally specify a list of parameters.

Local Referencing Environments

Subprograms can define their own variables, thereby defining local referencing environments. Variables that are defined inside subprograms are called local variables, because their scope is usually the body of the subprogram in which they are defined.

Local variables can be stack-dynamic:

  • Advantages
    • Support for recursion
    • Storage for locals is shared among some subprograms
  • Disadvantages
    • Allocation/deallocation and initialization time
    • Indirect addressing
    • Subprograms cannot be history sensitive

Local variables can instead be static; their advantages and disadvantages are the opposite of those for stack-dynamic local variables (see the sketch below).
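
A small C++ sketch of the difference (the function names are illustrative): the static local retains its value between calls, so the subprogram is history sensitive; the stack-dynamic local does not.

#include <iostream>

int stackDynamicCounter() {
    int count = 0;         // allocated anew on every call
    return ++count;        // always returns 1
}

int staticCounter() {
    static int count = 0;  // allocated once, retained between calls
    return ++count;        // returns 1, then 2, then 3, ...
}

int main() {
    std::cout << stackDynamicCounter() << "\n";  // 1
    std::cout << stackDynamicCounter() << "\n";  // 1 again
    std::cout << staticCounter() << "\n";        // 1
    std::cout << staticCounter() << "\n";        // 2
}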

Parameter-Passing Methods

Semantics Models of Parameter Passing

Parameter-passing methods are the ways in which parameters are transmitted to and/or from called subprograms. First, we focus on the different semantics models of parameter-passing methods. Then, we discuss the various implementation models invented by language designers for these semantics models. Next, we survey the design choices of several languages and discuss the actual methods used to implement the implementation models. Finally, we consider the design considerations that face a language designer in choosing among the methods. Formal parameters are characterized by one of three distinct semantics models: (1) They can receive data from the corresponding actual parameter; (2) they can transmit data to the actual parameter; or (3) they can do both. These models are called in mode, out mode, and inout mode.

Implementation Models of Parameter Passing

A variety of models have been developed by language designers to guide the implementation of the three basic parameter transmission modes.

  • Two important considerations:
    • Efficiency
    • One-way or two-way data transfer
  • But these considerations conflict:
    • Good programming practice suggests limited access to variables, which means one-way transfer whenever possible
    • But pass-by-reference is more efficient for passing structures of significant size (see the sketch below)
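
The conflict can be seen in C++, where the common implementation choices look like this (the function names are illustrative; C++ has no distinct out mode, so a reference is typically used for that purpose). Pass-by-reference-to-const combines the one-way semantics of in mode with the efficiency of passing an address.

#include <iostream>
#include <vector>

void inMode(int x)     { x = x + 1; }  // pass-by-value: a copy, in mode
void inoutMode(int& x) { x = x + 1; }  // pass-by-reference: two-way, inout mode

long total(const std::vector<long>& v) {  // by reference, but read-only
    long s = 0;
    for (long e : v) s += e;
    return s;
}

int main() {
    int a = 1;
    inMode(a);    std::cout << a << "\n";   // still 1
    inoutMode(a); std::cout << a << "\n";   // now 2
    std::cout << total({1, 2, 3}) << "\n";  // 6, and the vector is not copied
}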

Parameters That Are Subprograms

In programming, a number of situations occur that are most conveniently handled if subprogram names can be sent as parameters to other subprograms. One common example of these occurs when a subprogram must sample some mathematical function. For example, a subprogram that does numerical integration estimates the area under the graph of a function by sampling the function at a number of different points.

When such a subprogram is written, it should be usable for any given function; it should not need to be rewritten for every function that must be integrated. It is therefore natural that the name of a program function that evaluates the mathematical function to be integrated be sent to the integrating subprogram as a parameter. Although the idea is natural and seemingly simple, the details of how it works can be confusing. If only the transmission of the subprogram code were necessary, it could be done by passing a single pointer. However, two complications arise. First, there is the matter of type checking the parameters of the activations of the subprogram that was passed as a parameter.

In C and C++, functions cannot be passed as parameters, but pointers to functions can. The type of a pointer to a function includes the function’s protocol. Because the protocol includes all parameter types, such parameters can be completely type checked.
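
A sketch of the integration example in C++ (the function names and the midpoint sampling scheme are illustrative): because the pointer’s type includes the protocol of the passed function, the call can be fully type checked.

#include <iostream>

double square(double x) { return x * x; }

// f is a pointer to a function taking and returning double
double integrate(double (*f)(double), double a, double b, int n) {
    double h = (b - a) / n, sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += f(a + (i + 0.5) * h);  // sample the passed function
    return sum * h;
}

int main() {
    std::cout << integrate(square, 0.0, 1.0, 1000) << "\n";  // about 1/3
}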

Fortran 95+ provides a mechanism for declaring the types of the parameters of subprograms that are passed as parameters, so that such calls can also be type checked. The second complication with parameters that are subprograms appears only with languages that allow nested subprograms. The issue is what referencing environment for executing the passed subprogram should be used. There are three choices:

  • The environment of the call statement that enacts the passed subprogram (shallow binding)
  • The environment of the definition of the passed subprogram (deep binding)
  • The environment of the call statement that passed the subprogram as an actual parameter (ad hoc binding)

Calling Subprograms Indirectly

There are situations in which subprograms must be called indirectly. These most often occur when the specific subprogram to be called is not known until run time. The call to the subprogram is made through a pointer or reference to the subprogram, which has been set during execution before the call is made. The two most common applications of indirect subprogram calls are event handling in graphical user interfaces, which are now part of nearly all Web applications as well as many non-Web applications, and callbacks, in which a subprogram is called and instructed to notify the caller when its work is complete.

The concept of calling subprograms indirectly is not a recently developed concept. C and C++ allow a program to define a pointer to a function, through which the function can be called. In C++, pointers to functions are typed according to the return type and parameter types of the function, so that such a pointer can point only at functions with one particular protocol.
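
A minimal sketch in C++ (the handler names are illustrative): which function runs is decided during execution, and the call is then made through the pointer.

#include <iostream>

void onSave() { std::cout << "saving\n"; }
void onQuit() { std::cout << "quitting\n"; }

int main() {
    char cmd;
    std::cin >> cmd;
    void (*handler)() = (cmd == 's') ? onSave : onQuit;  // set at run time
    handler();  // indirect call through the pointer
}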

Overloaded Subprograms

An overloaded operator is one that has multiple meanings. The meaning of a particular instance of an overloaded operator is determined by the types of its operands. For example, if the * operator has two floating-point operands in a Java program, it specifies floating-point multiplication. But if the same operator has two integer operands, it specifies integer multiplication.

An overloaded subprogram is a subprogram that has the same name as another subprogram in the same referencing environment. Every version of an overloaded subprogram must have a unique protocol; that is, it must be different from the others in the number, order, or types of its parameters, and possibly in its return type if it is a function. The meaning of a call to an overloaded subprogram is determined by the actual parameter list (and/or possibly the type of the returned value, in the case of a function). Although it is not necessary, overloaded subprograms usually implement the same process.

C++, Java, C#, and Ada include predefined overloaded subprograms. In Ada, the return type of an overloaded function can be used to disambiguate calls (thus two overloaded functions can have the same parameters). Ada, Java, C++, and C# allow users to write multiple versions of subprograms with the same name.
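
A minimal C++ sketch (the function name is illustrative): the two versions share a name but have distinct protocols, and the actual parameters select between them.

#include <iostream>

int twice(int x)       { return 2 * x; }
double twice(double x) { return 2 * x; }

int main() {
    std::cout << twice(3) << "\n";    // calls the int version: 6
    std::cout << twice(1.5) << "\n";  // calls the double version: 3
}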

Generic Subprograms

Software reuse can be an important contributor to software productivity. One way to increase the reusability of software is to lessen the need to create different subprograms that implement the same algorithm on different types of data. For example, a programmer should not need to write four different sort subprograms to sort four arrays that differ only in element type.

A polymorphic subprogram takes parameters of different types on different activations. Overloaded subprograms provide a particular kind of polymorphism called ad hoc polymorphism. Overloaded subprograms need not behave similarly. Languages that support object-oriented programming usually support subtype polymorphism. Subtype polymorphism means that a variable of type T can access any object of type T or any type derived from T.

A more general kind of polymorphism is provided by the methods of Python and Ruby. Recall that variables in these languages do not have types, so formal parameters do not have types. Therefore, a method will work for any type of actual parameter, as long as the operators used on the formal parameters in the method are defined.

To summarize: a generic or polymorphic subprogram takes parameters of different types on different activations; overloaded subprograms provide ad hoc polymorphism; subtype polymorphism is the form supported by object-oriented languages. A subprogram that takes a generic parameter that is used in a type expression describing the types of its parameters provides parametric polymorphism (see the sketch below).
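
A C++ sketch of parametric polymorphism (the name myMax is illustrative, chosen to avoid clashing with the library’s std::max): the generic parameter T appears in the type expressions of the subprogram’s parameters, and the compiler instantiates a version for each type used.

#include <iostream>
#include <string>

template <typename T>
T myMax(T a, T b) {
    return (a > b) ? a : b;  // works for any T that defines operator>
}

int main() {
    std::cout << myMax(3, 7) << "\n";                                   // T = int
    std::cout << myMax(std::string("ab"), std::string("cd")) << "\n";  // T = std::string
}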

User-Defined Overloaded Operators

Operators can be overloaded by the user in Ada, C++, Python, and Ruby. Suppose that a Python class is developed to support complex numbers and arithmetic operations on them. A complex number can be represented with two floating-point values. The Complex class would have members for these two, named real and imag.

A Python example

def __add__(self, second):
    # Overloads + for Complex operands (assumes a Complex class with
    # real and imag members, as described above)
    return Complex(self.real + second.real,
                   self.imag + second.imag)

Use: the expression x + y is evaluated as x.__add__(y)

Closures

Defining a closure is a simple matter; a closure is a subprogram and the referencing environment where it was defined. The referencing environment is needed if the subprogram can be called from any arbitrary place in the program. Explaining a closure is not so simple.

If a static-scoped programming language does not allow nested subprograms, closures are not useful, so such languages do not support them. All of the variables in the referencing environment of a subprogram in such a language (its local variables and the global variables) are accessible, regardless of the place in the program where the subprogram is called.

When subprograms can be nested, in addition to locals and globals, the referencing environment of a subprogram can include variables defined in all enclosing subprograms. However, this is not an issue if the subprogram can be called only in places where all of the enclosing scopes are active and visible. It becomes an issue if a subprogram can be called elsewhere. This can happen if the subprogram can be passed as a parameter or assigned to a variable, thereby allowing it to be called from virtually anywhere in the program.

In short, a static-scoped language that does not permit nested subprograms does not need closures; they are needed only when a subprogram can access variables in nesting scopes and can be called from anywhere. To support closures, an implementation may need to provide unlimited extent to some variables, because a subprogram may access a nonlocal variable that would otherwise no longer be alive (see the sketch below).
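
A minimal sketch in C++ (C++11 or later; the names are illustrative): the returned lambda captures n from makeAdder’s referencing environment. Here C++ copies n into the closure object, which is how the value survives after makeAdder returns.

#include <functional>
#include <iostream>

std::function<int(int)> makeAdder(int n) {
    return [n](int x) { return x + n; };  // captures n by value
}

int main() {
    auto add5 = makeAdder(5);      // makeAdder has returned, but n lives on
    std::cout << add5(3) << "\n";  // prints 8
}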

Coroutines

A coroutine is a special kind of subprogram. Rather than the master-slave relationship between a caller and a called subprogram that exists with conventional subprograms, caller and called coroutines are more equitable. In fact, the coroutine control mechanism is often called the symmetric unit control model.

Coroutines can have multiple entry points, which are controlled by the coroutines themselves. They also have the means to maintain their status between activations. This means that coroutines must be history sensitive and thus have static local variables. Secondary executions of a coroutine often begin at points other than its beginning. Because of this, the invocation of a coroutine is called a resume rather than a call.
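
As a sketch, C++20 coroutines can illustrate resume semantics. The IntGen wrapper below is an assumed helper (the standard did not ship a ready-made generator type until C++23), so all of its names are illustrative; it holds just enough machinery to drive one int-yielding coroutine.

#include <coroutine>
#include <iostream>

struct IntGen {
    struct promise_type {
        int value;
        IntGen get_return_object() {
            return IntGen(std::coroutine_handle<promise_type>::from_promise(*this));
        }
        std::suspend_always initial_suspend() { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        std::suspend_always yield_value(int v) { value = v; return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };
    std::coroutine_handle<promise_type> h;
    explicit IntGen(std::coroutine_handle<promise_type> hh) : h(hh) {}
    IntGen(const IntGen&) = delete;
    ~IntGen() { if (h) h.destroy(); }
    bool resume() { h.resume(); return !h.done(); }  // re-enter after the last yield
    int current() const { return h.promise().value; }
};

// Each resume continues after the previous co_yield: the coroutine
// keeps its status (the loop variable i) between activations.
IntGen counter(int from, int to) {
    for (int i = from; i <= to; ++i)
        co_yield i;
}

int main() {
    auto g = counter(1, 3);
    while (g.resume())
        std::cout << g.current() << "\n";  // prints 1, 2, 3
}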

Dec
26
2017
0

Session VI : Statement-Level Control Structures

In this Session VI: Statement-Level Control Structures, there are 2 subtopics:

  • Selection Statements
  • Iterative Statements

Computations in imperative-language programs are accomplished by evaluating expressions and assigning the resulting values to variables. However, there are few useful programs that consist entirely of assignment statements. At least two additional linguistic mechanisms are necessary to make the computations in programs flexible and powerful: some means of selecting among alternative control flow paths (of statement execution) and some means of causing the repeated execution of statements or sequences of statements. Statements that provide these kinds of capabilities are called control statements.

Selection Statements

A selection statement provides the means of choosing between two or more execution paths in a program. Such statements are fundamental and essential parts of all programming languages, as was proven by Böhm and Jacopini. Selection statements fall into two general categories: two-way and n-way, or multiple selection.

The general form is:

if control_expression

  then clause

  else clause

If the then reserved word or some other syntactic marker is not used to introduce the then clause, the control expression is placed in parentheses. In C89, C99, Python, and C++, the control expression can be arithmetic; in most other languages, it must be Boolean. In many contemporary languages, the then and else clauses can be either single statements or compound statements. In Perl, all clauses must be delimited by braces (that is, they must be compound). In Fortran 95, Ada, Python, and Ruby, the clauses are statement sequences.
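
For example, in C++ (a minimal sketch): the control expression must be parenthesized because no then marker is used, and braces turn each clause into a compound statement.

#include <iostream>

int main() {
    int count = 5;
    if (count > 0) {                // parenthesized control expression
        std::cout << "positive\n";  // then clause (compound statement)
    } else {
        std::cout << "not positive\n";
    }
}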

Iterative Statements

An iterative statement is one that causes a statement or collection of statements to be executed zero, one, or more times. The repeated execution of a statement or compound statement is accomplished either by iteration or by recursion. An iterative statement is often called a loop.

The body of an iterative statement is the collection of statements whose execution is controlled by the iteration statement. We use the term pretest to mean that the test for loop completion occurs before the loop body is executed, and posttest to mean that it occurs after the loop body is executed. The iteration statement and the associated loop body together form an iteration construct. In addition to the primary iteration statements, we discuss an alternative form that is in a class by itself: user-defined iteration control.
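
A minimal C++ sketch of the pretest/posttest distinction: with the same false condition, the while body runs zero times and the do-while body runs once.

#include <iostream>

int main() {
    int i = 10;
    while (i < 3)          // pretest: the condition is checked first,
        std::cout << i++;  // so this body never executes

    int j = 10;
    do
        std::cout << j++;  // posttest: the body runs before the test,
    while (j < 3);         // so it executes once (prints 10)
    std::cout << "\n";
}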

Conclusion

We have described and discussed a variety of statement-level control structures. A brief evaluation now seems to be in order. First, we have the theoretical result that only sequence, selection, and pretest logical loops are absolutely required to express computations. This result has been used by those who wish to ban unconditional branching altogether. Of course, there are already sufficient practical problems with the goto to condemn it without also using a theoretical reason. One of the main legitimate needs for gotos—premature exits from loops—can be met with highly restricted branch statements, such as break.

