A Journey Through the World of Algorithms

13 min read · Mar 12, 2024

It’s been a while since I last wrote an article, and today’s post focuses on algorithms, that constant headache of ours. I intend to put together a summary of as much of the algorithm material as I have read and researched. It’s going to be a bit lengthy, but I’m excited to delve into it with you.

To understand the concept of algorithms, I think it’s necessary to have some grasp of data structures. If you’d like to take a look, I’m including some links here before we start.

The Topics We Will Cover In This Article:

  • What is an Algorithm?
  • Fundamentals of Algorithm Analysis
  • Performance Analysis of an Algorithm
  • Big-O Notation
  • Best Case & Worst Case Scenarios
  • Algorithm Design Techniques
  • Algorithm Search Strategies
  • Examples of some well-known algorithms

Grab your coffee, and let’s get started on this article.

Algorithms

An algorithm is a set of instructions for accomplishing a task. Its purpose is to provide clear instructions for solving a problem, that is, for obtaining the required output for any valid input in a finite amount of time. In this sense, every piece of code that solves a problem can be viewed as an algorithm.

The use of algorithms is widespread in problem-solving, efficiency improvement, and innovation across various domains. Their importance lies in the advancements in technology, science, and practical applications, as they offer structured and effective solutions to a variety of challenges.

From a drawing in my notebook

Methods of Specifying an Algorithm

An algorithm is usually described either in words (in free form or in a step-by-step form) or in pseudocode.

Using a natural language has an obvious appeal; however, the inherent ambiguity of any natural language makes a succinct and clear description of algorithms surprisingly difficult.

Nevertheless, being able to do this is an important skill that you should strive to develop in the process of learning algorithms.

  • Natural Language: Algorithms can be described using natural language, like English. This method is accessible to a wide audience but may lack the precision and unambiguous nature required for complex algorithms.
  • Pseudocode: Pseudocode is an informal, high-level description of an algorithm that combines elements of natural language with programming-language conventions. It is more structured than natural language and helps bridge the gap between human understanding and code. As the name suggests, it is not real code but a representation of code that can be understood even by a layman with some school-level programming knowledge.
Example: Compute the first 50 Fibonacci numbers

Pseudocode

1. Declare an integer variable called n
2. Declare an integer variable sum
3. Declare an integer variable f1
4. Declare an integer variable f2
5. Set sum to 0
6. Set f1 and f2 to 1
7. Set n to 50
8. Repeat n times:
   a. sum = f1 + f2
   b. f2 = f1
   c. f1 = sum
   d. print sum
9. End loop
  • Flowcharts: In the earlier days of computing, the dominant vehicle for specifying algorithms was a flowchart, a method of expressing an algorithm by a collection of connected geometric shapes containing descriptions of the algorithm’s steps.
  • Programming Language Code: A highly accurate method is to write the algorithm directly in a programming language. It is suitable when the goal is to implement the algorithm in software. Nevertheless, it may be less accessible to non-programmers.
// Recursive implementation of the Fibonacci series
// Note: this naive recursion recomputes the same values over and over, so its
// running time grows exponentially with n; N = 50 is already impractically slow.

class example {

    static int fib(int n) {
        if (n <= 1)
            return n;                     // base cases: fib(0) = 0, fib(1) = 1
        return fib(n - 1) + fib(n - 2);   // recursive case
    }

    public static void main(String[] args) {
        int N = 50;
        for (int i = 0; i < N; i++) {
            System.out.println(fib(i) + " ");
        }
    }
}

Fundamentals of Algorithm Analysis

We usually want our algorithms to possess several qualities. After correctness, by far the most important is efficiency.

There are two kinds of algorithm efficiency: time efficiency, indicating how fast the algorithm runs, and space efficiency, indicating how much extra memory it uses.

Another desirable characteristic of an algorithm is simplicity. Unlike efficiency, which can be precisely defined and investigated with mathematical rigor, simplicity, like beauty, is to a considerable degree in the eye of the beholder. Sometimes simpler algorithms are also more efficient than more complicated alternatives. Unfortunately, it is not always true, in which case a judicious compromise needs to be made.

Yet another desirable characteristic of an algorithm is generality. There are, in fact, two issues here: the generality of the problem the algorithm solves and the set of inputs it accepts.

If you are not satisfied with the algorithm’s efficiency, simplicity, or generality, you must return to the drawing board and redesign the algorithm.

Another important issue of algorithmic problem-solving is the question of whether or not every problem can be solved by an algorithm. We are not talking here about problems that do not have a solution, such as finding real roots of a quadratic equation with a negative discriminant. For such cases, an output indicating that the problem does not have a solution is all we can and should expect from an algorithm.

We can give the Königsberg Bridge Problem as an example of a problem that has no solution. If you’re wondering what it is, I’ve added a link here for you to research.

Nor are we talking about ambiguously stated problems. Even some unambiguous problems that must have a simple yes or no answer are undecidable, i.e., unsolvable by any algorithm.

To sum up, implementing an algorithm correctly is necessary but not sufficient: you would not like to diminish your algorithm’s power by an inefficient implementation.

Performance Analysis of an Algorithm

Various solutions or algorithms must be evaluated for their performance and efficiency, including the time it takes to run/execute and the total memory consumed.

Analyzing an algorithm’s performance starts with the question of how that performance should be measured.

This question arises when choosing the “best” among several algorithms proposed for a problem. Three basic approaches are commonly used to measure an algorithm’s performance: experimental analysis, probabilistic (average-case) analysis, and worst-case analysis. We will concentrate on worst-case analysis, which is connected to the Big-O notation that we intend to cover.

Worst-case analysis looks at how an algorithm behaves on the inputs that force it to do the most work. The complexity of an algorithm is related to its execution time. The time required by an algorithm, also known as the running time, depends on both the structure and the size of the data.

The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the longest among all possible inputs of that size.

To compare and rank such orders of growth, computer scientists use three notations: big O, big omega, and big theta. Of these three, we will focus only on Big-O notation.

Big-O Notation

In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows.
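
For a concrete feel of what the notation expresses, here is a small illustrative Java sketch (the class and method names are mine, chosen for this example, not from any particular library): both methods answer the same question, but the first one’s running time grows quadratically with the input size while the second one’s grows roughly linearly.

// Illustrative sketch: two ways of checking for duplicates in an array.
// containsDuplicateQuadratic compares every pair: O(n^2) time, O(1) extra space.
// containsDuplicateLinear uses a HashSet: O(n) expected time, O(n) extra space.
import java.util.HashSet;
import java.util.Set;

class BigOExample {

    static boolean containsDuplicateQuadratic(int[] a) {
        for (int i = 0; i < a.length; i++) {
            for (int j = i + 1; j < a.length; j++) {
                if (a[i] == a[j]) return true;   // ~n*(n-1)/2 comparisons in the worst case
            }
        }
        return false;
    }

    static boolean containsDuplicateLinear(int[] a) {
        Set<Integer> seen = new HashSet<>();
        for (int x : a) {
            if (!seen.add(x)) return true;       // each element is processed once
        }
        return false;
    }

    public static void main(String[] args) {
        int[] data = {3, 1, 4, 1, 5};
        System.out.println(containsDuplicateQuadratic(data)); // true
        System.out.println(containsDuplicateLinear(data));    // true
    }
}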

Algorithm Design Techniques

An algorithm design technique (also called a paradigm or strategy) is a general approach to solving problems algorithmically that is applicable to a variety of problems from different areas of computing.

Some Algorithm Design Techniques

SOME EXAMPLES OF ALGORITHMS

DIVIDE & CONQUER

Divide-and-conquer algorithms work according to the following general plan:

  • A problem is divided into several subproblems of the same type, ideally of about equal size.
  • The subproblems are solved (typically recursively, though sometimes a different algorithm is employed, especially when subproblems become small enough).
  • If necessary, the solutions to the subproblems are combined to get a solution to the original problem.

Merge Sort

Merge Sort is a perfect example of a successful application of the divide and conquer technique. It’s defined as a sorting algorithm that works by dividing an array into smaller subarrays, sorting each subarray, and then merging the sorted subarrays back together to form the final sorted array.

Merge sort is a recursive algorithm that continuously splits the array in half until it cannot be further divided i.e., the array has only one element left (an array with one element is always sorted). Then the sorted subarrays are merged into one sorted array.
https://en.m.wikipedia.org/wiki/File:Merge-sort-example-300px.gif

Applications of Merge Sort:

  • Sorting large datasets (Worst-case time complexity of O(n log n))
  • External sorting
  • Custom sorting
  • Inversion Count Problem

Let’s code:
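
A minimal merge sort sketch in Java (ascending order, array-based; the class and method names are my own for illustration) could look like this:

// A minimal merge sort sketch (ascending order) illustrating divide & conquer.
import java.util.Arrays;

class MergeSortExample {

    static void mergeSort(int[] a, int left, int right) {
        if (left >= right) return;               // a single element is already sorted
        int mid = (left + right) / 2;
        mergeSort(a, left, mid);                  // divide: sort the left half
        mergeSort(a, mid + 1, right);             // divide: sort the right half
        merge(a, left, mid, right);               // conquer: merge the two sorted halves
    }

    static void merge(int[] a, int left, int mid, int right) {
        int[] tmp = new int[right - left + 1];
        int i = left, j = mid + 1, k = 0;
        while (i <= mid && j <= right) {
            tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];  // take the smaller front element
        }
        while (i <= mid) tmp[k++] = a[i++];       // copy any leftovers from the left half
        while (j <= right) tmp[k++] = a[j++];     // copy any leftovers from the right half
        System.arraycopy(tmp, 0, a, left, tmp.length);
    }

    public static void main(String[] args) {
        int[] data = {38, 27, 43, 3, 9, 82, 10};
        mergeSort(data, 0, data.length - 1);
        System.out.println(Arrays.toString(data)); // [3, 9, 10, 27, 38, 43, 82]
    }
}

The merging step is what costs the extra O(n) memory; in exchange, the overall running time stays at O(n log n) even in the worst case.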

Quick Sort

Quicksort is the other important sorting algorithm that is based on the divide-and-conquer approach. Unlike mergesort, which divides its input elements according to their position in the array, quicksort divides them according to their value.

https://en.m.wikipedia.org/wiki/File:Quicksort-example.gif

The key process in quicksort is the partition() function. Partitioning aims to place the pivot (which can be any element) at its correct position in the sorted array, with all smaller elements to the left of the pivot and all larger elements to its right.

Let’s code:
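
A minimal quicksort sketch in Java, using the last element as the pivot (a Lomuto-style partition; the names are my own for illustration), could look like this:

// A minimal quicksort sketch using the last element as the pivot (Lomuto partition).
import java.util.Arrays;

class QuickSortExample {

    static void quickSort(int[] a, int low, int high) {
        if (low >= high) return;
        int p = partition(a, low, high);  // pivot ends up at its final sorted position
        quickSort(a, low, p - 1);         // sort elements smaller than the pivot
        quickSort(a, p + 1, high);        // sort elements larger than the pivot
    }

    static int partition(int[] a, int low, int high) {
        int pivot = a[high];
        int i = low - 1;                  // boundary of the "smaller than pivot" region
        for (int j = low; j < high; j++) {
            if (a[j] < pivot) {
                i++;
                swap(a, i, j);
            }
        }
        swap(a, i + 1, high);             // place the pivot between the two regions
        return i + 1;
    }

    static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    public static void main(String[] args) {
        int[] data = {10, 80, 30, 90, 40, 50, 70};
        quickSort(data, 0, data.length - 1);
        System.out.println(Arrays.toString(data)); // [10, 30, 40, 50, 70, 80, 90]
    }
}

With this pivot choice the worst case is O(n²) on already-sorted input, although the average case remains O(n log n); randomized or median-of-three pivot selection is a common remedy.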

Binary Trees

The most important divide-and-conquer algorithms for binary trees are the three classic traversals: preorder, inorder, and postorder. All three traversals visit the nodes of a binary tree recursively, i.e., by visiting the tree’s root and its left and right subtrees. They differ only by the timing of the root’s visit (a small code sketch follows the list below):

  • In the preorder traversal, the root is visited before the left and right subtrees are visited (in that order).
An example of “Preorder Traversal”
  • In the inorder traversal, the root is visited after visiting its left subtree but before visiting the right subtree.
An example of “Inorder Traversal”
  • In the postorder traversal, the root is visited after visiting the left and right subtrees (in that order).
An example of “Postorder Traversal”
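
A small Java sketch of the three traversals on a five-node tree (the Node class and the values are my own for illustration) might look like this:

// A small binary tree with the three classic recursive traversals.
class TraversalExample {

    static class Node {
        int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    static void preorder(Node n) {             // root, left, right
        if (n == null) return;
        System.out.print(n.value + " ");
        preorder(n.left);
        preorder(n.right);
    }

    static void inorder(Node n) {              // left, root, right
        if (n == null) return;
        inorder(n.left);
        System.out.print(n.value + " ");
        inorder(n.right);
    }

    static void postorder(Node n) {            // left, right, root
        if (n == null) return;
        postorder(n.left);
        postorder(n.right);
        System.out.print(n.value + " ");
    }

    public static void main(String[] args) {
        //        1
        //       / \
        //      2   3
        //     / \
        //    4   5
        Node root = new Node(1);
        root.left = new Node(2);
        root.right = new Node(3);
        root.left.left = new Node(4);
        root.left.right = new Node(5);

        preorder(root);  System.out.println();   // 1 2 4 5 3
        inorder(root);   System.out.println();   // 4 2 5 1 3
        postorder(root); System.out.println();   // 4 5 2 3 1
    }
}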

TRANSFORM & CONQUER

Transform-and-conquer algorithms work as two-stage procedures. First, in the transformation stage, the problem’s instance is modified to be, for one reason or another, more amenable to a solution. Then, in the second or conquering stage, it is solved.

  • Presorting
  • Trees
  • Heap & Heapsort

Let’s choose the “Heapsort Algorithm” to examine.

Heapsort

A Heap is a complete binary tree in which the value at each node is either greater than or smaller than the values of its children. There are two types of Heap:

  • Max Heap: The value of each node is GREATER than its children.
  • Min Heap: The value of each node is SMALLER than its children.

Heapify is the operation of restoring the heap property at a node; applying it repeatedly builds a heap from the elements of an array. The heapify operation can be used to build either a max heap or a min heap.

https://commons.wikimedia.org/wiki/File:Heap_sort_example.gif

In the tree example given below, every parent is smaller than its children (a min heap).

In order not to make the article any longer, we will skip the deletion and data addition operations and look at the sorting.
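
A minimal in-place heapsort sketch in Java, based on a max heap (the class and method names are my own for illustration), could look like the following:

// A minimal in-place heapsort sketch: build a max heap, then repeatedly move the
// root (current maximum) to the end of the array and restore the heap property.
import java.util.Arrays;

class HeapSortExample {

    static void heapSort(int[] a) {
        int n = a.length;
        for (int i = n / 2 - 1; i >= 0; i--) {   // build a max heap bottom-up
            heapify(a, n, i);
        }
        for (int end = n - 1; end > 0; end--) {
            swap(a, 0, end);                     // move the current maximum to its final place
            heapify(a, end, 0);                  // re-heapify the shrunken heap
        }
    }

    // Sift the element at index i down until the subtree rooted there is a max heap.
    static void heapify(int[] a, int heapSize, int i) {
        int largest = i;
        int left = 2 * i + 1, right = 2 * i + 2;
        if (left < heapSize && a[left] > a[largest]) largest = left;
        if (right < heapSize && a[right] > a[largest]) largest = right;
        if (largest != i) {
            swap(a, i, largest);
            heapify(a, heapSize, largest);
        }
    }

    static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    public static void main(String[] args) {
        int[] data = {4, 10, 3, 5, 1};
        heapSort(data);
        System.out.println(Arrays.toString(data)); // [1, 3, 4, 5, 10]
    }
}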

DYNAMIC PROGRAMMING

Dynamic programming is a programming technique in which an algorithmic problem is first broken down into subproblems, the results of those subproblems are saved, and the saved results are then reused and combined to obtain the overall solution, which usually involves optimizing something, such as finding a maximum or a minimum value.

  • Coin row
  • Knapsack Problem
  • Warshall’s and Floyd’s Algorithms

Coin Row Problem

Some coins, of various denominations, are arranged in a line.

Two players take turns removing a coin from either end of the line. They are free to choose which end to take a coin from, but taking a coin from the middle is not allowed. The winner of the game is the player who collects the larger amount of money.

Peter Winkler, in his book Mathematical Puzzles, describes a simple strategy that is guaranteed to work but only if the number of coins is even.

What if the number of coins is even?

If the sum of the coins in the odd positions is greater, then player 1 starts the game by taking the first coin and keeps taking odd-positioned coins. Otherwise, player 1 takes the last coin and keeps taking even-positioned coins. This is possible because, at the start of the game, player 1 can pick either the first coin (odd position) or the last coin (even position). Once player 1 picks the coin of their choice, player 2 is always faced with coins of the opposite parity at both ends. So no matter which end player 2 takes a coin from, player 1 can pick up a coin from the same end and is guaranteed not to lose the game.
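
For an arbitrary number of coins, the dynamic-programming view of the same game stores, for every remaining segment of the line, the best achievable lead for the player whose turn it is. A small Java sketch of this idea (my own illustration, not Winkler’s parity strategy) could look like this:

// Dynamic-programming sketch for the coin game described above. diff[i][j] is the
// best possible lead (own total minus opponent's total) for the player to move when
// the coins at positions i..j remain, assuming both sides play optimally.
class CoinGameExample {

    static int firstPlayerBestTotal(int[] coins) {
        int n = coins.length;
        int[][] diff = new int[n][n];
        int total = 0;
        for (int i = 0; i < n; i++) {
            diff[i][i] = coins[i];          // only one coin left: take it
            total += coins[i];
        }
        for (int len = 2; len <= n; len++) {
            for (int i = 0; i + len - 1 < n; i++) {
                int j = i + len - 1;
                int takeLeft = coins[i] - diff[i + 1][j];   // take the left coin, opponent moves next
                int takeRight = coins[j] - diff[i][j - 1];  // take the right coin, opponent moves next
                diff[i][j] = Math.max(takeLeft, takeRight);
            }
        }
        // The first player's total follows from the overall sum and the optimal lead.
        return (total + diff[0][n - 1]) / 2;
    }

    public static void main(String[] args) {
        int[] coins = {5, 3, 7, 10};
        System.out.println(firstPlayerBestTotal(coins)); // 15 out of the 25 units in play
    }
}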

GREEDY

The technique’s name reflects how it works: at each step, it makes a “greedy” grab of the best alternative available, in the hope that a sequence of locally optimal choices will yield a (globally) optimal solution to the entire problem.

  • Graph Problem
  • Dijkstra’s Algorithm
  • Spanning Tree
  • Prim’s Algorithm

Prim’s Algorithm

Prim’s algorithm is one of the efficient methods for finding the minimum spanning tree of a graph.

The algorithm, much like Dijkstra’s shortest-path algorithm, begins from a vertex that is set as the root and grows a tree that reaches all the vertices of the graph by repeatedly adding the least-cost adjacent edge (a code sketch follows the step-by-step figures below).

STEP 1

STEP 2

STEP 3

STEP 4

STEP 5
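
Putting the steps above into code, a compact Prim’s sketch on a small adjacency-matrix graph (the example graph and all names are my own for illustration) might look like this:

// A minimal Prim's algorithm sketch on a small undirected graph stored as an
// adjacency matrix (0 means "no edge"). It grows the minimum spanning tree from
// vertex 0, always adding the cheapest edge that reaches a new vertex.
class PrimExample {

    static int minimumSpanningTreeWeight(int[][] graph) {
        int n = graph.length;
        boolean[] inTree = new boolean[n];
        int[] bestEdge = new int[n];             // cheapest known edge connecting each vertex to the tree
        java.util.Arrays.fill(bestEdge, Integer.MAX_VALUE);
        bestEdge[0] = 0;                         // start growing the tree from vertex 0
        int totalWeight = 0;

        for (int step = 0; step < n; step++) {
            int u = -1;
            for (int v = 0; v < n; v++) {        // pick the cheapest vertex not yet in the tree
                if (!inTree[v] && (u == -1 || bestEdge[v] < bestEdge[u])) u = v;
            }
            inTree[u] = true;
            totalWeight += bestEdge[u];
            for (int v = 0; v < n; v++) {        // relax edges leaving the newly added vertex
                if (graph[u][v] != 0 && !inTree[v] && graph[u][v] < bestEdge[v]) {
                    bestEdge[v] = graph[u][v];
                }
            }
        }
        return totalWeight;
    }

    public static void main(String[] args) {
        int[][] graph = {
            {0, 2, 0, 6, 0},
            {2, 0, 3, 8, 5},
            {0, 3, 0, 0, 7},
            {6, 8, 0, 0, 9},
            {0, 5, 7, 9, 0}
        };
        System.out.println(minimumSpanningTreeWeight(graph)); // 16
    }
}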

Brute Force

Brute force algorithms are exactly what they sound like: straightforward methods of solving a problem that rely on sheer computing power and trying every possibility, rather than on advanced techniques to improve efficiency.

  • Bubble Sort
  • Selection Sort
  • Breadth-First Search
  • Depth-First Search

Bubble Sort

Bubble sort is a simple sorting algorithm that repeatedly swaps adjacent elements if they are in the wrong order. Its average- and worst-case time complexity of O(n²) makes it unsuitable for large data sets.

https://commons.wikimedia.org/wiki/File:Bubble-sort-example-300px.gif

Suppose we are trying to sort the elements in ascending order.

STEP 0

STEP 1

STEP 2

STEP 3

Let’s code…
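
A basic bubble sort sketch in Java (ascending order; the names are my own for illustration):

// A basic bubble sort sketch: adjacent elements are swapped whenever they are out
// of order. Every pass is performed even if the array is already sorted, so the
// number of comparisons is always on the order of n^2.
import java.util.Arrays;

class BubbleSortExample {

    static void bubbleSort(int[] a) {
        int n = a.length;
        for (int i = 0; i < n - 1; i++) {
            for (int j = 0; j < n - 1 - i; j++) {
                if (a[j] > a[j + 1]) {           // swap neighbours that are in the wrong order
                    int t = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = t;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {-2, 45, 0, 11, -9};
        bubbleSort(data);
        System.out.println(Arrays.toString(data)); // [-9, -2, 0, 11, 45]
    }
}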

In the above algorithm, all the comparisons are made even if the array is already sorted!

Selection Sort

Selection sort is a sorting algorithm that selects the smallest element from an unsorted list in each iteration and places that element at the beginning of the unsorted list.

https://codepumpkin.com/selection-sort-algorithms/

STEP 1

  • Set the first element as the minimum.

STEP 2

  • Compare the minimum with the second element. If the second element is smaller, assign it as the new minimum. Then compare the minimum with the third element, and so on; the process goes on until the last element.

STEP 3

  • After each iteration, the minimum is placed in the front of the unsorted list.

STEP 4

  • For each iteration, indexing starts from the first unsorted element. Steps 1 to 3 are repeated until all the elements are placed in their correct positions.

Let’s code…
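
A minimal selection sort sketch in Java following the steps above (the names are my own for illustration):

// A minimal selection sort sketch: in each pass, find the smallest remaining
// element and swap it to the front of the unsorted part of the array.
import java.util.Arrays;

class SelectionSortExample {

    static void selectionSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            int minIndex = i;                    // assume the first unsorted element is the minimum
            for (int j = i + 1; j < a.length; j++) {
                if (a[j] < a[minIndex]) {
                    minIndex = j;                // a smaller element was found
                }
            }
            int t = a[i];                        // place the minimum at the front of the unsorted part
            a[i] = a[minIndex];
            a[minIndex] = t;
        }
    }

    public static void main(String[] args) {
        int[] data = {20, 12, 10, 15, 2};
        selectionSort(data);
        System.out.println(Arrays.toString(data)); // [2, 10, 12, 15, 20]
    }
}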

Breadth-First Search

Breadth-first search is a simple strategy in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on.

Implementation:

  • fringe is a FIFO queue, i.e., new successors go at the end
  • TREE-SEARCH(problem, FIFO-QUEUE())
https://commons.wikimedia.org/wiki/File:Breadth-First-Search-Algorithm.gif
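
A minimal breadth-first search sketch in Java on a small adjacency-list graph, using a FIFO queue as the fringe (the example graph and names are my own for illustration):

// A minimal breadth-first search sketch on an unweighted graph stored as adjacency
// lists. The FIFO queue plays the role of the "fringe": newly discovered successors
// are appended at the end, so vertices are visited level by level.
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

class BfsExample {

    static void bfs(List<List<Integer>> adjacency, int start) {
        boolean[] visited = new boolean[adjacency.size()];
        Queue<Integer> fringe = new ArrayDeque<>();
        visited[start] = true;
        fringe.add(start);
        while (!fringe.isEmpty()) {
            int u = fringe.poll();               // expand the oldest vertex in the fringe
            System.out.print(u + " ");
            for (int v : adjacency.get(u)) {
                if (!visited[v]) {
                    visited[v] = true;
                    fringe.add(v);               // successors go to the end of the queue
                }
            }
        }
    }

    public static void main(String[] args) {
        // edges: 0-1, 0-2, 1-3, 2-4
        List<List<Integer>> adjacency = List.of(
            List.of(1, 2), List.of(0, 3), List.of(0, 4), List.of(1), List.of(2));
        bfs(adjacency, 0); // 0 1 2 3 4
    }
}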

Depth-First Search

Depth-first Search or Depth-first traversal is a recursive algorithm for searching all the vertices of a graph or tree data structure. Traversal means visiting all the nodes of a graph.

  • Expand the deepest unexpanded node
  • Search proceeds immediately to the deepest level of the search tree, where the nodes have no successors
https://en.m.wikipedia.org/wiki/File:Depth-First-Search.gif
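
A minimal recursive depth-first search sketch in Java on the same kind of adjacency-list graph (again, my own illustration):

// A minimal recursive depth-first search sketch: it keeps going deeper along one
// branch before backtracking to explore the remaining branches.
import java.util.List;

class DfsExample {

    static void dfs(List<List<Integer>> adjacency, int u, boolean[] visited) {
        visited[u] = true;
        System.out.print(u + " ");
        for (int v : adjacency.get(u)) {
            if (!visited[v]) {
                dfs(adjacency, v, visited);      // expand the deepest unexpanded node first
            }
        }
    }

    public static void main(String[] args) {
        // edges: 0-1, 0-2, 1-3, 2-4
        List<List<Integer>> adjacency = List.of(
            List.of(1, 2), List.of(0, 3), List.of(0, 4), List.of(1), List.of(2));
        dfs(adjacency, 0, new boolean[adjacency.size()]); // 0 1 3 2 4
    }
}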

As a university student, I try to write by analyzing what I have learned in my courses and what I am curious about, and I believe writing this piece has deepened my own knowledge. Feel free to share your comments below.

P.S. Freeform was used to create all the images I didn’t reference.

You can follow me on LinkedIn and GitHub.

REFERENCES

https://www.baeldung.com/java-merge-sort

https://edu.anarcho-copy.org/Algorithm/grokking-algorithms-illustrated-programmers-curious.pdf

https://en.wikipedia.org/wiki/Merge_sort

Written by Zehra Gökçe Aynacı

Learning, growing, and chasing excellence—one challenge at a time.
