5 Algorithms That Have Wide Applications in Industry

You will find that some of the more interesting and powerful algorithms from a problem-solving setting see little use in industry. The goal in presenting the algorithms below is the opposite: to show how widely applicable and useful they are in their own right in the field. The benefits of some of them may be hard to explain to someone who has not yet been introduced to them, but I have tried my best to give a good view of their role. This feature covers some algorithms that are quite applicable in industry, presented as follows:

1) MapReduce

MapReduce Animation

Image Courtesy: systems-deployment.com

‘MapReduce’ is a framework for processing parallelizable problems across huge datasets using a large number of computers, collectively referred to as a cluster or a grid. Computational processing can occur on data stored either in a file system or in a database. MapReduce can take advantage of locality of data, processing it on or near the storage assets in order to reduce the distance over which it must be transmitted. MapReduce’s stable inputs and outputs are usually stored in a distributed file system, while transient data is usually stored on local disk and fetched remotely by the reducers. The framework basically works in two steps:

-Map step: The master node takes the input, divides it into smaller sub-problems, and distributes them to worker nodes. A worker node may do this again in turn, leading to a multi-level tree structure. The worker node processes the smaller problem, and passes the answer back to its master node.

-Reduce step: The master node then collects the answers to all the sub-problems and combines them in some way to form the output, which is the answer to the problem it was originally trying to solve.
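
To make the two steps concrete, here is a minimal single-process sketch in Python of the classic word-count example. The function names and the in-memory shuffle stage are illustrative stand-ins for what a real framework distributes across master and worker nodes.

```python
from collections import defaultdict

# A minimal, single-process sketch of the MapReduce idea, using word count.
# Real frameworks distribute these phases across the nodes of a cluster.

def map_phase(document):
    """Map step: emit (key, value) pairs; here, (word, 1) for each word."""
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    """Group values by key, as the framework does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce step: combine all values for one key; here, sum the counts."""
    return key, sum(values)

documents = ["the quick brown fox", "the lazy dog", "the fox"]
mapped = [pair for doc in documents for pair in map_phase(doc)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(counts)  # {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```

The in-memory shuffle here is what a real cluster performs over the network between the two steps, which is why scheduling map tasks close to the data matters so much.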

MapReduce has proved its usefulness in a wide range of applications, including distributed pattern-based searching, distributed sorting, machine learning, and statistical machine translation. At Google, MapReduce was used to completely regenerate Google’s index of the World Wide Web, replacing the old ad hoc programs that updated the index and ran the various analyses. MapReduce implementations are available in a variety of programming languages, including Java, C++, Python, Perl, Ruby, and C.

-Limitations

MapReduce tasks must be written as acyclic dataflow programs, i.e. a stateless mapper followed by a stateless reducer, executed by a batch job scheduler. This paradigm makes repeated querying of datasets difficult and imposes limitations that are felt in fields such as machine learning, where iterative algorithms that revisit a single working set multiple times are the norm.

2) Naive Bayes Classifier

A naive demonstration of k-fold cross-validation.

Image Courtesy: imada.sdu.dk

Classification is a natural first step into machine learning, and the naive Bayes classifier combines it with the trending concept of Bayesian inference. In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes’ theorem with strong independence assumptions between the features. In simple terms, a naive Bayes classifier assumes that, given the class, the value of a particular feature is unrelated to the presence or absence of any other feature. For some types of probability models, naive Bayes classifiers can be trained very efficiently in a supervised learning setting. In many practical applications, parameter estimation for naive Bayes models uses the method of maximum likelihood; in other words, one can work with the naive Bayes model without accepting Bayesian probability or using any Bayesian methods.

An advantage of naive Bayes is that it only requires a small amount of training data to estimate the parameters (means and variances of the variables) necessary for classification. Because the variables are assumed independent, only the variances of the variables for each class need to be determined, not the entire covariance matrix. The assumed distributions of the features in a naive Bayes classifier are called the event model; for discrete features like the ones encountered in document classification (including spam filtering), multinomial and Bernoulli distributions are popular. Still, a comprehensive comparison with other classification algorithms in 2006 showed that Bayes classification is outperformed by other approaches, such as boosted trees or random forests. Despite its simplicity, however, it can still address key problems like spam classification.
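
As an illustration of how little needs to be estimated, the following is a minimal Gaussian naive Bayes sketch in Python. The toy dataset, the function names, and the choice of a Gaussian event model are assumptions made purely for this example.

```python
import math
from collections import defaultdict

# A minimal Gaussian naive Bayes sketch: per class, estimate only each
# feature's mean and variance (no covariance matrix), then score a new
# sample with Bayes' theorem under the feature-independence assumption.
# The toy data is made up; real code would also add a small variance
# floor to avoid division by zero.

def fit(samples, labels):
    by_class = defaultdict(list)
    for x, y in zip(samples, labels):
        by_class[y].append(x)
    model = {}
    for y, rows in by_class.items():
        means = [sum(col) / len(rows) for col in zip(*rows)]
        variances = [sum((v - m) ** 2 for v in col) / len(rows)
                     for col, m in zip(zip(*rows), means)]
        model[y] = (len(rows) / len(samples), means, variances)
    return model

def predict(model, x):
    def log_posterior(prior, means, variances):
        # log P(y) plus, for each feature i, log N(x_i | mean_i, var_i)
        lp = math.log(prior)
        for v, m, var in zip(x, means, variances):
            lp += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
        return lp
    return max(model, key=lambda y: log_posterior(*model[y]))

X = [[1.0, 2.1], [1.2, 1.9], [3.0, 3.9], [3.2, 4.1]]
y = ["a", "a", "b", "b"]
print(predict(fit(X, y), [1.1, 2.0]))  # -> 'a'
```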

3) Floyd–Warshall Algorithm

Floyd Warshall All Pairs Shortest Path Algorithm

Image Courtesy: cs.nju.edu.cn

In computer science, the Floyd–Warshall algorithm is a graph analysis algorithm for finding shortest paths in a weighted graph with positive or negative edge weights (but no negative cycles), and also for finding the transitive closure of a relation R. The Floyd–Warshall algorithm was published in its currently recognized form by Robert Floyd in 1962. The algorithm compares all possible paths through the graph between each pair of vertices. This is remarkable considering that there may be up to Ω(|V|²) edges in the graph, and every combination of edges is tested. It does so by incrementally improving an estimate of the shortest path between two vertices until the estimate is optimal. The algorithm is an example of dynamic programming.

If w(i, j) is the weight of the edge between vertices i and j, we can define shortestPath(i, j, k+1), the shortest path from i to j using only the first k+1 vertices as intermediates, by the following recursive formula:

shortestPath(i, j, k+1) = min(shortestPath(i, j, k), shortestPath(i, k+1, k) + shortestPath(k+1, j, k))

with the base case shortestPath(i, j, 0) = w(i, j). This formula is the basis of the Floyd–Warshall algorithm. The algorithm works by first computing shortestPath(i, j, k) for all (i, j) pairs for k = 1, then k = 2, and so on. This process continues until k = n, at which point we have found the shortest path for all (i, j) pairs using any intermediate vertices.
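
The recursion translates almost directly into three nested loops over the intermediate vertex k. Below is a minimal Python sketch on a small made-up weighted graph, with INF standing for a missing edge.

```python
# A minimal sketch of Floyd-Warshall on an adjacency matrix, where INF
# marks a missing edge. The loop variable k is the intermediate vertex
# from the recursion above.

INF = float("inf")

def floyd_warshall(weights):
    n = len(weights)
    dist = [row[:] for row in weights]   # base case: direct edge weights
    for k in range(n):                   # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

graph = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
for row in floyd_warshall(graph):
    print(row)  # row i holds the shortest distances from vertex i
```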

The algorithm can be used to solve a number of problems, including shortest paths in directed graphs, inversion of real matrices, optimal routing, and quick computation of Pathfinder networks. Implementations of the Floyd–Warshall algorithm are available in various programming languages, including C++, C#, Java, MATLAB, Perl, Python, and R.

4) Hill Climbing

The Hill-climbing Search Algorithm

Image Courtesy: pages.cpsc.ucalgary.ca

Hill climbing is a mathematical optimization technique which belongs to the family of local search. This iterative algorithm starts with an arbitrary solution to a problem, then attempts to find a better solution by incrementally changing a single element of the solution. If the change produces a better solution, another incremental change is made to the new solution, and the process repeats until no further improvements can be found. Mathematically, it can be described as a technique that attempts to maximize or minimize a target function f(x), where x is a vector of continuous or discrete values. At each iteration, hill climbing adjusts a single element in x and determines whether the change improves the value of f(x); only changes that improve f(x) are accepted. The process continues until no change can be found that improves the value of f(x), at which point x is said to be locally optimal. The gradient descent method differs from hill climbing in that at each iteration it changes x according to the gradient of the hill.
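
As a concrete sketch of the loop just described, the following Python fragment maximizes a simple made-up quadratic by adjusting one coordinate of x at a time; the step size, iteration budget, and target function are illustrative choices.

```python
import random

# A minimal hill-climbing sketch: repeatedly tweak one element of the
# current solution and keep the change only if it improves f(x).
# A proper convergence test is omitted for brevity.

def hill_climb(f, x, step=0.1, max_iters=10000):
    x = list(x)
    for _ in range(max_iters):
        i = random.randrange(len(x))            # adjust a single element of x
        candidate = list(x)
        candidate[i] += random.choice([-step, step])
        if f(candidate) > f(x):                 # accept only improvements
            x = candidate
    return x

# Maximize f(x) = -(x0 - 3)^2 - (x1 + 1)^2, whose global maximum is (3, -1).
f = lambda v: -(v[0] - 3) ** 2 - (v[1] + 1) ** 2
print(hill_climb(f, [0.0, 0.0]))  # approximately [3.0, -1.0]
```

Because the quadratic here has a single peak, the sketch always finds the global maximum; on a function with several peaks it would stop at whichever local maximum it climbed first, which is exactly the limitation discussed below.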

The relative simplicity of the algorithm makes it a popular first choice amongst optimizing algorithms. The main problem hill climbing can encounter is that of local maxima: the algorithm stops making progress towards an optimal solution because no adjacent state offers an immediate improvement. Hill climbing can often produce a better result than other algorithms when the amount of time available to perform a search is limited, such as with real-time systems. It is an anytime algorithm: it can return a valid solution even if it is interrupted at any point before it ends. It is applicable to many problems where the current state allows for an accurate evaluation function, e.g. the travelling salesman problem and the eight-queens problem. Furthermore, hill climbing has also been used in artificial intelligence and robotics to manage multiple robot teams.

5) Shor’s Algorithm

Shor Quantum Algorithm

Image Courtesy: csee.umbc.edu

Shor’s algorithm is a quantum algorithm for integer factorization, formulated by Peter Shor in 1994. An algorithm is called efficient if its execution time, i.e. the number of elementary operations, is asymptotically polynomial in the length of its input measured in bits. On a quantum computer, to factor an integer N, Shor’s algorithm runs in polynomial time (the time taken is polynomial in log N, which is the size of the input). Specifically, it takes time O((log N)³), demonstrating that the integer factorization problem can be efficiently solved on a quantum computer. This is substantially faster than the most efficient known classical factoring algorithm, the general number field sieve, which works in sub-exponential time. The factorization also requires a huge number of quantum gates, growing with N as (log N)³; factoring a 4096-bit number would thus require 4,947,802,324,992 quantum gates. The algorithm is composed of two parts. The first part turns the factoring problem into the problem of finding the period of a function, and may be implemented classically. The second part finds the period using the quantum Fourier transform, and is responsible for the quantum speedup.
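
To make the two-part structure concrete, here is a classical Python sketch of part one: the reduction from factoring to period finding. The brute-force find_period below stands in for the quantum Fourier transform of part two, so this is only feasible for tiny N such as 15.

```python
import math
import random

# A classical sketch of the reduction at the heart of Shor's algorithm:
# factoring N becomes finding the period r of f(x) = a^x mod N. A quantum
# computer finds r efficiently with the quantum Fourier transform; the
# brute-force search below stands in for it and only works for tiny N.

def find_period(a, N):
    """Smallest r > 0 with a^r = 1 (mod N), for a coprime to N."""
    x, value = 1, a % N
    while value != 1:
        x += 1
        value = (value * a) % N
    return x

def shor_classical(N):
    while True:
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g, N // g        # lucky guess: a already shares a factor
        r = find_period(a, N)
        if r % 2 == 1:
            continue                # need an even period
        y = pow(a, r // 2, N)
        if y == N - 1:
            continue                # a^(r/2) = -1 (mod N) tells us nothing
        p = math.gcd(y - 1, N)
        if 1 < p < N:
            return p, N // p

print(shor_classical(15))  # (3, 5) or (5, 3)
```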

Furthermore, if a quantum computer with a sufficient number of qubits could operate without succumbing to noise, Shor’s algorithm could be used to break public-key cryptography schemes such as RSA, which is based on the assumption that factoring large numbers is computationally infeasible. This assumption is valid for classical (non-quantum) computers, as no classical algorithm is known that can factor in polynomial time, but Shor’s algorithm shows that factoring is efficient on an ideal quantum computer, so it may be feasible to defeat RSA by constructing a large quantum computer. The efficiency of Shor’s algorithm is due to the efficiency of the quantum Fourier transform and of modular exponentiation by repeated squaring. In 2001, Shor’s algorithm was demonstrated by a group at IBM, who factored 15 into 3 × 5 using an NMR implementation of a quantum computer with 7 qubits. In 2012, the factorization of 21 was achieved, setting the record for the largest number factored with Shor’s algorithm. In April 2012, the factorization of 143 was achieved, although this used adiabatic quantum computation rather than Shor’s algorithm.
