Tnou Study Center

Blog Owner: Vignesh A






Sunday, December 9, 2012

BCA 02 (C Programming and Data Structures) TNOU First Year Assignment



BCA 02 (C Programming and Data Structures)
Answer all the Questions:
1. Explain Stack and Queue.
Answer:
Explanation of Stack:
The stack is an area of memory that holds all local variables and parameters used by any function, and remembers the order in which functions are called so that function returns occur correctly. Each time a function is called, its local variables and parameters are "pushed onto" the stack. When the function returns, these locals and parameters are "popped." Because of this, the size of a program's stack fluctuates constantly as the program is running, but it has some maximum size.

One way of describing the stack is as a last in, first out (LIFO) abstract data type and linear data structure. A stack can have any abstract data type as an element, but is characterized by two fundamental operations, called push and pop (or pull). The push operation adds a new item to the top of the stack, or initializes the stack if it is empty. If the stack is full and does not contain enough space to accept the given item, the stack is then considered to be in an overflow state.
A stack is a restricted data structure, because only a small number of operations are performed on it. The nature of the push and pop operations also means that stack elements have a natural order. Elements are removed from the stack in the reverse order to the order of their addition: therefore, the lower elements are those that have been on the stack the longest.
Explanation of Queue:
In computer science, a queue is a particular kind of abstract data type or collection in which the entities in the collection are kept in order and the principal (or only) operations on the collection are the addition of entities to the rear terminal position, known as enqueue, and removal of entities from the front terminal position, known as dequeue. This makes the queue a First-In-First-Out (FIFO) data structure. In a FIFO data structure, the first element added to the queue will be the first one to be removed. This is equivalent to the requirement that once an element is added, all elements that were added before it have to be removed before the new element can be removed. Often a peek or front operation is also implemented, returning the value of the front element without dequeuing it. A queue is an example of a linear data structure, or more abstractly a sequential collection.
Queues are common in computer programs, where they are implemented as data structures coupled with access routines, as an abstract data structure or in object-oriented languages as classes.





2. Write the algorithm of DFS.
Answer:
Depth-first search (DFS) is an algorithm for traversing or searching a tree or graph. One starts at the root (selecting some node as the root in the graph case) and explores as far as possible along each branch before backtracking.
Formally, DFS is an uninformed search that progresses by expanding the first child node of the search tree that appears and thus going deeper and deeper until a goal node is found, or until it hits a node that has no children. Then the search backtracks, returning to the most recent node it hasn't finished exploring. In a non-recursive implementation, all freshly expanded nodes are added to a stack for exploration.



Example:
For the following graph (the original figure, an undirected graph on the vertices A to G, is not reproduced here):
a depth-first search starting at A, assuming that the left edges in the shown graph are chosen before right edges, and assuming the search remembers previously visited nodes and will not repeat them (since this is a small graph), will visit the nodes in the following order: A, B, D, F, E, C, G. The edges traversed in this search form a Trémaux tree, a structure with important applications in graph theory.
Performing the same search without remembering previously visited nodes results in visiting nodes in the order A, B, D, F, E, A, B, D, F, E, etc. forever, caught in the A, B, D, F, E cycle and never reaching C or G.
3. Write short notes on control structures.
Answer:
The C language possesses decision-making capabilities and supports the following statements, known as control or decision-making statements:
1. if statement
2. switch statement
3. Conditional operator statement
4. goto statement
1. if statement
The if statement is a powerful decision-making statement and is used to control the flow of execution of statements. It is basically a two-way decision statement and is used in conjunction with an expression.




Syntax

if (condition)
{
    block of statements executed if the condition is true;
}
else
{
    block of statements executed if the condition is false;
}

Example:

#include <stdio.h>

int main()
{
    int x = 5;
    if (x > 1)
    {
        x = x + 10;
    }
    printf("%d", x);   /* prints 15 */
    return 0;
}
2. switch statement
The switch and case statements help control complex conditional and branching operations.
The switch statement transfers control to a statement within its body.

Syntax: 
switch (expression) {
case item: statements; break;
case item: statements; break;
case item: statements; break;
default: statement; break;
}

Example:

#include <stdio.h>

int main()
{
    int numb;
    printf("Type any number: ");
    scanf("%d", &numb);

    switch (numb % 2)
    {
        case 0: printf("the number %d is even", numb);
                break;
        case 1: printf("the number %d is odd", numb);
                break;
    }
    return 0;
}
3. Conditional operator statement
Ternary condition
The ? : (ternary condition) operator is a more compact form for expressing simple if-else statements. It has the following form:
expression1 ? expression2 : expression3
Example:
res = (a>b) ? a : b;
If a is greater than b, then res gets the value of a; otherwise res gets the value of b.
4. The goto statement
The goto is an unconditional branching statement used to transfer control of the program from one statement to another.
The goto statement should be used sparingly, because it makes programs hard to follow. It is only recommended as a last resort, when structured solutions would be much more complicated.
                                                PART B
Answer the following question (1 x 10 = 10 marks) in about 300 words:
1. Explain the algorithms of Quicksort and Heapsort.
Answer:
We will look at two more sorting algorithms. The first, heapSort, is very interesting theoretically: it sorts an array with n items in time Θ(n lg n) in place (i.e., it does not need much extra memory). The second algorithm, quickSort, is known for being the most efficient sorting algorithm in practice, although it has a rather high worst-case running time of Θ(n²).


Quicksort
The sorting algorithm quickSort, like mergeSort, is based on the divide-and-conquer paradigm. As opposed to mergeSort and heapSort, quickSort has a relatively bad worst-case running time of Θ(n²). However, quickSort is very fast in practice, hence the name. Theoretical evidence for this behaviour is provided by an average-case analysis, which is too technical to cover here, so we will only consider the worst case and the best case.

The algorithm quickSort works as follows:

(1) If the input array has less than two elements, there is nothing to do. Otherwise, partition the array as follows: Pick a particular key called the pivot and divide the array into two subarrays, of which the first only contains
items whose key is smaller than or equal to the pivot and the second only items whose key is greater than or equal to the pivot.

(2) Sort the two subarrays recursively.

Note that quickSort does most of its work in the "divide" step (i.e., in the partitioning routine), whereas in mergeSort the dividing is trivial, but the "conquer" step must reassemble the recursively sorted subarrays using the merge method. This is not necessary in quickSort, because after the first step all elements in the first subarray are smaller than or equal to those in the second subarray.

A problem, which is responsible for the bad worst-case running time of quickSort, is that the partitioning step is not guaranteed to divide the array into two subarrays of the same size (if we could enforce this somehow, we would have a Θ(n lg n) algorithm). If we implement partitioning in an obvious way, all items can end up in one of the two subarrays, and we only reduce the size of our problem by 1. The following pseudo-code implements the main algorithm.


Algorithm quickSort(A, i, j)
1. if i < j then
2.     split ← partition(A, i, j)
3.     quickSort(A, i, split)
4.     quickSort(A, split + 1, j)

Heapsort
To understand the basic principle behind heapSort, it is best to recall maxSort, a simple but slow sorting algorithm. The algorithm maxSort repeatedly picks the maximum key in the subarray it currently considers and puts it in the last place of this subarray, where it belongs. Then it continues with the subarray containing all but the last item.

Algorithm maxSort(A)

1. for j ← A.length − 1 downto 1 do
2.     m ← 0
3.     for i = 1 to j do
4.         if A[i].key > A[m].key then m ← i
5.     exchange A[m], A[j]

The algorithm heapSort follows the same principle, but uses a heap to efficiently find the maximum in each step. For heapSort, we only need the following two methods:

removeMax(): Return and remove an item with maximum key.

buildHeap(A) (and by implication, heapify()): Turn array A into a heap.
Notice that because we provide all the items at the beginning of the algorithm (our use of the heap is static rather than dynamic), we do not need the method insertItem() as an individual method. With the standard heap implementation, removeMax() has a running time of Θ(lg n) and buildHeap() has a running time of Θ(n).

With these methods available, the implementation of heapSort is very simple. To see that it works correctly, observe that an array A of size n is in sorted (increasing) order if and only if for every j with 1 ≤ j ≤ A.length − 1 = n − 1, the entry A[j] contains a maximum element of the subarray A[0 . . . j]. Line 3 of heapSort ensures that at each index from A.length − 1 down to 0, we always insert a maximum element of the remaining collection of elements into A[j] (and remove it from the heap, reducing the collection of elements there). Hence it sorts correctly.

Algorithm heapSort(A)
1. buildHeap(A)
2. for j ← A.length − 1 downto 0 do
3.     A[j] ← removeMax()

