Algorithm Design Techniques


Algorithm -> a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer

An algorithm is a procedure or formula for solving a problem, based on conducting a sequence of specified actions. A computer program can be viewed as an elaborate algorithm. In mathematics and computer science, an algorithm usually means a small procedure that solves a recurrent problem.

Case Study –

Let’s consider a few problems, solve them step by step, and analyze the algorithms.

Tower of Hanoi problem.
The Tower of Hanoi problem was invented by the French mathematician Edouard Lucas in 1883. In this problem there are three disks and three pegs; using these pegs, we need to move the whole stack of disks while keeping them in ascending order of size.

So the objective is to move all the disks to another tower without violating the sequence of arrangement. The rules to be followed are −

1)     Only one disk can be moved among the towers at any given time.
2)     Only the "top" disk can be removed.
3)     No larger disk can sit on top of a smaller disk.

Solution:
1)     Move the smallest disk to the 3rd peg
2)     Move the 2nd largest disk to the 2nd peg
3)     Move the smallest disk from the 3rd peg to the 2nd peg
4)     Move the largest disk to the 3rd peg and the smallest disk to the 1st peg
5)     Move the 2nd largest disk to the 3rd peg, then the smallest disk to the 3rd peg
Now all the disks are on the 3rd peg.
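
The same sequence can be generated by a short recursive routine. Below is a minimal Python sketch (the function and peg names are mine, chosen for illustration) that prints the moves for n disks; for n = 3 it reproduces the 7 moves listed above.

def tower_of_hanoi(n, source, spare, target):
    """Print the moves that transfer n disks from source to target."""
    if n == 0:
        return
    # Move the top n-1 disks out of the way, onto the spare peg.
    tower_of_hanoi(n - 1, source, target, spare)
    # Move the largest remaining disk straight to the target peg.
    print("Move disk %d from %s to %s" % (n, source, target))
    # Bring the n-1 smaller disks from the spare peg onto the target peg.
    tower_of_hanoi(n - 1, spare, source, target)

tower_of_hanoi(3, "1st peg", "2nd peg", "3rd peg")   # prints the 7 moves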

Let’s discuss the different algorithm design techniques.
Although more than one technique may be applicable to a specific problem, it is often the case that an algorithm constructed by one approach is clearly superior to equivalent solutions built using alternative techniques.

Different approaches to design an algorithm

Brute Force

Brute force is a straightforward approach to solving a problem based on the problem’s statement and the definitions of the concepts involved. It is considered one of the easiest approaches to apply and is useful for solving small-size instances of a problem.
Some examples of brute force algorithms are:

1)     Selection sort
2)     Bubble sort
3)     Sequential search
4)     Exhaustive search: Traveling Salesman Problem, Knapsack problem.
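
As an illustration of the approach, sequential search simply compares the key with every element in turn. A minimal Python sketch (the function name and test data are illustrative):

def sequential_search(items, key):
    """Brute force: check each element until the key is found."""
    for index, value in enumerate(items):
        if value == key:
            return index          # position of the first match
    return -1                     # key is not present

print(sequential_search([7, 3, 9, 1], 9))   # 2
print(sequential_search([7, 3, 9, 1], 5))   # -1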

Greedy Algorithms – the "take what you can get now" strategy

The solution is constructed through a sequence of steps, each expanding a partially constructed solution obtained so far. At each step the choice must be locally optimal – this is the central point of this technique.

Examples:

1)     Minimal spanning tree
2)     Shortest distance in graphs
3)     Greedy algorithm for the Knapsack problem
4)     The coin exchange problem
5)     Huffman trees for optimal encoding

Greedy techniques are mainly used to solve optimization problems. They do not always give the best solution.
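
For example, a greedy solution to the coin exchange problem repeatedly takes the largest coin that still fits into the remaining amount. A minimal Python sketch (the denominations are an assumption chosen for illustration); for some other coin systems this locally optimal choice does not produce the optimal answer:

def greedy_change(amount, denominations=(25, 10, 5, 1)):
    """Greedy coin exchange: always take the largest coin that fits."""
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            coins.append(coin)    # locally optimal choice
            amount -= coin
    return coins

print(greedy_change(63))   # [25, 25, 10, 1, 1, 1]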

Divide-and-Conquer, Decrease-and-Conquer

These are methods of designing algorithms that (informally) proceed as follows:

Given an instance of the problem to be solved, split this into several smaller sub-instances (of the same problem), independently solve each of the sub-instances and then combine the sub-instance solutions so as to yield a solution for the original instance.

With the divide-and-conquer method the size of the problem instance is reduced by a factor (e.g. half the input size), while with the decrease-and-conquer method the size is reduced by a constant.

Examples of divide-and-conquer algorithms:

1)     Computing a^n (a > 0, n a nonnegative integer) by recursion
2)     Binary search in a sorted array (recursion)
3)     Merge sort algorithm, Quicksort algorithm  (recursion)
4)     The algorithm for solving the fake coin problem (recursion)
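
For example, merge sort splits the array in half, sorts each half independently, and then merges the two sorted halves. A minimal Python sketch:

def merge_sort(items):
    """Divide-and-conquer sort: split, sort the halves, merge them."""
    if len(items) <= 1:                  # smallest sub-instance
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])       # solve the sub-instances independently
    right = merge_sort(items[mid:])
    return merge(left, right)            # combine the sub-instance solutions

def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result

print(merge_sort([5, 2, 9, 1, 7]))   # [1, 2, 5, 7, 9]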

Examples of decrease-and-conquer algorithms:

1)     Topological sorting
2)     Binary Tree traversals: inorder, preorder and postorder (recursion)
3)     Computing the length of the longest path in a binary tree (recursion)
4)     Computing Fibonacci numbers (recursion)
5)     Reversing a queue (recursion)
6)     Warshall’s algorithm (recursion)
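
For instance, computing Fibonacci numbers by recursion reduces the instance F(n) to the two smaller instances F(n-1) and F(n-2). A minimal Python sketch; note how identical sub-instances are recomputed many times, which is exactly the problem the dynamic programming section below addresses:

def fib(n):
    """Fibonacci numbers by plain recursion (exponential running time)."""
    if n < 2:
        return n                          # F(0) = 0, F(1) = 1
    return fib(n - 1) + fib(n - 2)        # identical sub-instances recur

print(fib(10))   # 55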

Dynamic Programming

One disadvantage of using Divide-and-Conquer is that the process of recursively solving separate sub-instances can result in the same computations being performed repeatedly since identical sub-instances may arise.

The idea behind dynamic programming is to avoid this pathology by obviating the requirement to calculate the same quantity twice.

The method usually accomplishes this by maintaining a table of sub-instance results.

Dynamic Programming is a Bottom-Up Technique in which the smallest sub-instances are explicitly solved first and the results of these used to construct solutions to progressively larger sub-instances.

In contrast, Divide-and-Conquer is a Top-Down Technique which logically progresses from the initial instance down to the smallest sub-instance via intermediate sub-instances.

 Examples:

1)     Fibonacci numbers computed by iteration.
2)     Warshall’s algorithm implemented by iteration
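
The iterative Fibonacci computation below illustrates the bottom-up idea: the two smallest results are stored first and every larger value is built from results that are already available, so nothing is computed twice. A minimal Python sketch:

def fib_dp(n):
    """Fibonacci numbers bottom-up: each value is computed exactly once."""
    if n < 2:
        return n
    prev, curr = 0, 1                     # F(0) and F(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr    # build F(k) from stored results
    return curr

print(fib_dp(10))   # 55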

Transform-and-Conquer
These methods work as two-stage procedures. First, the problem is modified to be more amenable to solution. In the second stage the problem is solved.

Types of problem modifications

1)     Problem simplification, e.g. presorting (a sketch follows this list)
2)     Change in the representation
3)     Problem reduction
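
As an illustration of problem simplification, deciding whether all elements of an array are distinct becomes easy once the input is presorted, because any duplicates must then be neighbours. A minimal Python sketch (names are illustrative):

def all_distinct(items):
    """Transform-and-conquer: presort, then compare only neighbours."""
    ordered = sorted(items)               # stage 1: simplify the problem
    for i in range(len(ordered) - 1):     # stage 2: solve the simpler problem
        if ordered[i] == ordered[i + 1]:
            return False
    return True

print(all_distinct([4, 1, 7, 3]))   # True
print(all_distinct([4, 1, 7, 4]))   # False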

Backtracking and branch-and-bound: generate and test methods

The method is used for state-space search problems. State-space search problems are problems where the problem representation consists of:

§  initial state
§  goal state(s)
§  a set of intermediate states
§  a set of operators that transform one state into another. Each operator has preconditions and postconditions.
§  a cost function – evaluates the cost of the operations (optional)
§  a utility function – evaluates how close a given state is to the goal state (optional)

The solving process is based on the construction of a state-space tree, whose nodes represent states: the root represents the initial state, one or more leaves are goal states, and each edge is labeled with an operator.
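
A classic backtracking example is the n-queens problem: each node of the state-space tree places a queen in the next row, and a branch is abandoned as soon as a placement attacks a queen already on the board. A minimal Python sketch (names and the board size are illustrative):

def solve_queens(n, placed=()):
    """Backtracking: extend a partial placement row by row, pruning conflicts."""
    row = len(placed)
    if row == n:                          # goal state: every row has a queen
        return placed
    for col in range(n):
        # Precondition: (row, col) is not attacked by any placed queen.
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(placed)):
            solution = solve_queens(n, placed + (col,))
            if solution:                  # success further down this branch
                return solution
    return None                           # dead end: backtrack

print(solve_queens(4))   # (1, 3, 0, 2)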

Genetic Algorithms
Genetic algorithms (GAs) are used mainly for optimization problems for which exact algorithms are highly inefficient.

GAs search for good solutions to a problem from among a (large) number of possible solutions. The current set of possible solutions is used to generate a new set of possible solutions.

They start with an initial set of possible solutions, and at each step they do the following:

1)     Evaluate the current set of solutions (the current generation)
2)     Choose the best of them to serve as “parents” for the new generation, and construct the new generation.

The loop runs until a specified condition becomes true, e.g. a solution is found that satisfies the criteria for a “good” solution, the number of iterations exceeds a given value, or no improvement has been recorded when evaluating the solutions.
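
The loop can be sketched in a few lines of Python. Everything below (the bit-string encoding, the population size, the mutation rate and the toy fitness function) is an illustrative assumption, not a standard recipe:

import random

def evolve(fitness, length=10, pop_size=20, generations=50, mutation_rate=0.1):
    """A toy genetic algorithm over fixed-length bit strings."""
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate the current generation and keep the best half as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        # Construct the new generation by crossover and occasional mutation.
        children = []
        while len(children) < pop_size:
            mother, father = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = mother[:cut] + father[cut:]      # one-point crossover
            if random.random() < mutation_rate:
                pos = random.randrange(length)
                child[pos] = 1 - child[pos]          # flip one bit
            children.append(child)
        population = children
    return max(population, key=fitness)

# Toy problem: maximise the number of 1-bits in the string.
best = evolve(fitness=sum)
print(best, sum(best))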



