Algorithm Visualizer
Interactive visualizations to help you understand how algorithms work step-by-step.
Sorting Algorithms
Bubble Sort
Difficulty: easy. A simple sorting algorithm that repeatedly steps through the list, compares adjacent elements and swaps them if they are in the wrong order. With a time complexity of O(n²), it's inefficient for large datasets but is easy to implement and understand. The algorithm gets its name because smaller elements 'bubble' to the top of the list with each iteration.
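As a rough sketch of the idea in TypeScript (the function name and the early-exit flag are illustrative, not the visualizer's own code):

```typescript
// Bubble sort: repeatedly swap adjacent out-of-order elements.
function bubbleSort(arr: number[]): number[] {
  const a = [...arr];
  for (let i = 0; i < a.length - 1; i++) {
    let swapped = false;
    // After pass i, the largest unsorted element has settled at the end.
    for (let j = 0; j < a.length - 1 - i; j++) {
      if (a[j] > a[j + 1]) {
        [a[j], a[j + 1]] = [a[j + 1], a[j]];
        swapped = true;
      }
    }
    if (!swapped) break; // no swaps means the array is already sorted
  }
  return a;
}

console.log(bubbleSort([5, 1, 4, 2, 8])); // [1, 2, 4, 5, 8]
```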
Selection Sort
Difficulty: easy. The selection sort algorithm sorts an array by repeatedly finding the minimum element from the unsorted part and putting it at the beginning. Unlike bubble sort, it makes only O(n) swaps, making it useful when write/swap operations are expensive. However, its O(n²) time complexity makes it inefficient for large datasets. Selection sort is not stable, meaning equal elements might not maintain their relative positions.
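A minimal TypeScript sketch of the same idea, assuming a plain number array (note the single swap per pass):

```typescript
// Selection sort: find the minimum of the unsorted suffix and swap it into place.
function selectionSort(arr: number[]): number[] {
  const a = [...arr];
  for (let i = 0; i < a.length - 1; i++) {
    let min = i;
    for (let j = i + 1; j < a.length; j++) {
      if (a[j] < a[min]) min = j;
    }
    if (min !== i) [a[i], a[min]] = [a[min], a[i]]; // at most one swap per pass
  }
  return a;
}

console.log(selectionSort([64, 25, 12, 22, 11])); // [11, 12, 22, 25, 64]
```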
Insertion Sort
Difficulty: easy. Insertion sort builds the sorted result one element at a time: at each iteration it takes the next element, finds the location where it belongs in the sorted portion, and inserts it there. While it has an average time complexity of O(n²), it performs exceptionally well on small or nearly-sorted arrays with best-case O(n) performance. Insertion sort is an adaptive, stable, in-place algorithm that works similarly to how people sort playing cards in their hands.
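A short illustrative TypeScript version, with the shifting of the sorted prefix made explicit:

```typescript
// Insertion sort: grow a sorted prefix by inserting each new element into place.
function insertionSort(arr: number[]): number[] {
  const a = [...arr];
  for (let i = 1; i < a.length; i++) {
    const key = a[i];
    let j = i - 1;
    // Shift larger elements of the sorted prefix one slot to the right.
    while (j >= 0 && a[j] > key) {
      a[j + 1] = a[j];
      j--;
    }
    a[j + 1] = key;
  }
  return a;
}

console.log(insertionSort([3, 7, 1, 9, 2])); // [1, 2, 3, 7, 9]
```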
Searching Algorithms
Linear Search
Difficulty: easy. Linear search sequentially checks each element of the list until it finds an element that matches the target value. With a time complexity of O(n), it's the simplest searching algorithm but becomes inefficient for large datasets. One advantage is that it works on unsorted arrays and doesn't require any preprocessing. Linear search is practical for small arrays or when the target is likely to be found early in the sequence. It's also useful when searching rarely happens or when elements are frequently added and removed.
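The whole algorithm fits in a few lines; this TypeScript sketch (with an illustrative generic signature) returns the first matching index or -1:

```typescript
// Linear search: scan left to right, return the first matching index or -1.
function linearSearch<T>(arr: T[], target: T): number {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i;
  }
  return -1;
}

console.log(linearSearch([4, 2, 7, 1], 7)); // 2
console.log(linearSearch([4, 2, 7, 1], 5)); // -1
```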
Binary Search
Difficulty: medium. Binary search finds the position of a target value within a sorted array by repeatedly dividing the search interval in half. With a logarithmic time complexity of O(log n), it's dramatically more efficient than linear search for large datasets. Binary search requires the array to be sorted beforehand, making it ideal for situations where searching occurs frequently on relatively static data. This algorithm is the foundation for many efficient data structures like binary search trees and is widely used in database systems, dictionaries, and numerous programming applications.
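A minimal TypeScript sketch of the halving loop, assuming a sorted number array:

```typescript
// Binary search on a sorted array: halve the interval until the target is found.
function binarySearch(arr: number[], target: number): number {
  let lo = 0;
  let hi = arr.length - 1;
  while (lo <= hi) {
    const mid = lo + Math.floor((hi - lo) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) lo = mid + 1; // target can only be in the right half
    else hi = mid - 1;                   // target can only be in the left half
  }
  return -1;
}

console.log(binarySearch([1, 3, 5, 7, 9, 11], 7)); // 3
```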
Graph Algorithms
Depth-First Search
Difficulty: medium. Depth-First Search (DFS) is a graph traversal algorithm that explores as far as possible along each branch before backtracking. It uses a stack data structure (often implemented using recursion) to keep track of vertices to visit next. DFS has applications in topological sorting, finding connected components, solving mazes, and detecting cycles in graphs.
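An illustrative iterative TypeScript version using an explicit stack over an adjacency-list map (the Graph type and sample graph are made up for the example):

```typescript
// Iterative DFS: the last vertex pushed onto the stack is explored first.
type Graph = Map<string, string[]>;

function dfs(graph: Graph, start: string): string[] {
  const visited = new Set<string>();
  const order: string[] = [];
  const stack = [start];
  while (stack.length > 0) {
    const node = stack.pop()!;
    if (visited.has(node)) continue;
    visited.add(node);
    order.push(node);
    for (const next of graph.get(node) ?? []) {
      if (!visited.has(next)) stack.push(next);
    }
  }
  return order;
}

const g: Graph = new Map([
  ["A", ["B", "C"]],
  ["B", ["D"]],
  ["C", []],
  ["D", []],
]);
console.log(dfs(g, "A")); // ["A", "C", "B", "D"]
```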
Breadth-First Search
Difficulty: medium. Breadth-First Search (BFS) is a graph traversal algorithm that explores all vertices at the present depth level before moving on to vertices at the next depth level. It uses a queue to keep track of the next vertices to visit, ensuring that vertices are visited in order of their distance from the source vertex. BFS is commonly used for finding the shortest path in unweighted graphs, finding connected components, and solving puzzles with the fewest possible moves.
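The same adjacency-list setup with a queue instead of a stack gives BFS; this sketch is illustrative rather than the visualizer's code:

```typescript
// BFS: visit vertices in order of their distance from the source using a FIFO queue.
type Graph = Map<string, string[]>;

function bfs(graph: Graph, start: string): string[] {
  const visited = new Set<string>([start]);
  const order: string[] = [];
  const queue: string[] = [start];
  while (queue.length > 0) {
    const node = queue.shift()!; // dequeue the oldest (shallowest) vertex
    order.push(node);
    for (const next of graph.get(node) ?? []) {
      if (!visited.has(next)) {
        visited.add(next);
        queue.push(next);
      }
    }
  }
  return order;
}

const g: Graph = new Map([
  ["A", ["B", "C"]],
  ["B", ["D"]],
  ["C", ["D"]],
  ["D", []],
]);
console.log(bfs(g, "A")); // ["A", "B", "C", "D"]
```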
Dijkstra's Algorithm
Difficulty: hard. Dijkstra's Algorithm is a graph search algorithm that solves the single-source shortest path problem for a graph with non-negative edge weights. It works by maintaining a set of vertices whose shortest distance from the source is already known and repeatedly selecting the vertex with the minimum distance value, updating the distance values of its adjacent vertices. Dijkstra's algorithm is widely used in network routing protocols and as a subroutine in other graph algorithms.
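A compact TypeScript sketch without a priority queue (it scans for the minimum vertex each round, giving O(V²)); real implementations typically use a binary heap instead:

```typescript
// Dijkstra: grow the set of finished vertices, always taking the closest next.
type WeightedGraph = Map<string, Array<[string, number]>>;

function dijkstra(graph: WeightedGraph, source: string): Map<string, number> {
  const dist = new Map<string, number>();
  for (const v of graph.keys()) dist.set(v, Infinity);
  dist.set(source, 0);
  const done = new Set<string>();

  while (done.size < graph.size) {
    // Pick the unfinished vertex with the smallest tentative distance.
    let u: string | null = null;
    for (const [v, d] of dist) {
      if (!done.has(v) && (u === null || d < dist.get(u)!)) u = v;
    }
    if (u === null || dist.get(u) === Infinity) break;
    done.add(u);
    // Relax every edge leaving u.
    for (const [v, w] of graph.get(u) ?? []) {
      const alt = dist.get(u)! + w;
      if (alt < (dist.get(v) ?? Infinity)) dist.set(v, alt);
    }
  }
  return dist;
}

const g: WeightedGraph = new Map([
  ["A", [["B", 1], ["C", 4]]],
  ["B", [["C", 2], ["D", 5]]],
  ["C", [["D", 1]]],
  ["D", []],
]);
console.log(dijkstra(g, "A")); // A: 0, B: 1, C: 3, D: 4
```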
Data Structures
Binary Search Tree (Insert)
Difficulty: medium. A binary search tree (BST) maintains the invariant that every node's left subtree contains only smaller values and its right subtree only larger values. Inserting a value walks the tree from the root, branching left or right at each node by comparison, until it reaches a missing child where the new node is attached. Average insertion is O(log n) when the tree is balanced; the worst case degrades to O(n) when input order produces a skewed tree, motivating self-balancing variants like AVL and red-black trees.
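An iterative insertion sketch in TypeScript (the Node shape is illustrative); duplicates are sent to the right here, which is one common convention:

```typescript
// BST insert: walk down by comparison and attach the new node at the missing child.
interface Node {
  value: number;
  left: Node | null;
  right: Node | null;
}

function insert(root: Node | null, value: number): Node {
  const node: Node = { value, left: null, right: null };
  if (root === null) return node;
  let cur = root;
  while (true) {
    if (value < cur.value) {
      if (cur.left === null) { cur.left = node; break; }
      cur = cur.left;
    } else {
      if (cur.right === null) { cur.right = node; break; }
      cur = cur.right;
    }
  }
  return root;
}

let tree: Node | null = null;
for (const v of [8, 3, 10, 1, 6]) tree = insert(tree, v);
console.log(tree?.left?.right?.value); // 6
```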
Min-Heap (Insert)
Difficulty: medium. A binary min-heap is a complete binary tree where every parent is less than or equal to its children. Stored as a zero-indexed array, the children of index i live at 2i+1 and 2i+2, so the structure stays compact without explicit pointers. Each insertion appends the new value at the end and 'sifts up' — repeatedly swapping it with its parent while the parent is larger — restoring the heap property in O(log n). Heaps are the backbone of priority queues, heap sort, and shortest-path algorithms like Dijkstra's.
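A small array-backed sketch in TypeScript showing just insert and peek (the class shape is illustrative):

```typescript
// Array-backed min-heap: push at the end, then sift the new value up.
class MinHeap {
  private data: number[] = [];

  insert(value: number): void {
    this.data.push(value);
    let i = this.data.length - 1;
    // Swap with the parent while the parent is larger (sift up).
    while (i > 0) {
      const parent = Math.floor((i - 1) / 2);
      if (this.data[parent] <= this.data[i]) break;
      [this.data[parent], this.data[i]] = [this.data[i], this.data[parent]];
      i = parent;
    }
  }

  peek(): number | undefined {
    return this.data[0]; // the smallest element sits at the root
  }
}

const heap = new MinHeap();
for (const v of [5, 3, 8, 1]) heap.insert(v);
console.log(heap.peek()); // 1
```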
Dynamic Programming
Edit Distance
Difficulty: hard. Edit distance — also called Levenshtein distance — measures the minimum number of single-character insertions, deletions, or substitutions needed to turn one string into another. The classic dynamic-programming solution fills an (m+1)×(n+1) table where dp[i][j] is the cost to transform the first i characters of one string into the first j of the other. Each cell depends only on its top, left, and top-left neighbours, so the table fills row-by-row in O(m·n). The traceback through the dependencies reconstructs an optimal edit script. Variants underpin spell-checkers, DNA alignment, and diff tools.
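A direct TypeScript transcription of the table fill (the traceback is omitted to keep the sketch short):

```typescript
// Levenshtein distance via the classic (m+1) x (n+1) DP table.
function editDistance(a: string, b: string): number {
  const m = a.length, n = b.length;
  const dp: number[][] = Array.from({ length: m + 1 }, () => new Array(n + 1).fill(0));
  for (let i = 0; i <= m; i++) dp[i][0] = i; // delete every character of a's prefix
  for (let j = 0; j <= n; j++) dp[0][j] = j; // insert every character of b's prefix
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      const subst = a[i - 1] === b[j - 1] ? 0 : 1;
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,        // deletion
        dp[i][j - 1] + 1,        // insertion
        dp[i - 1][j - 1] + subst // substitution (free on a match)
      );
    }
  }
  return dp[m][n];
}

console.log(editDistance("kitten", "sitting")); // 3
```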
Longest Common Subsequence
Difficulty: medium. The longest common subsequence (LCS) of two sequences is the longest sequence appearing in both as a subsequence — preserving order but not contiguity. The standard dynamic-programming solution fills an (m+1)×(n+1) table where dp[i][j] is the LCS length of the first i and j characters. When characters match, the cell extends the diagonal by one; otherwise it inherits the larger of its top and left neighbours. The table fills in O(m·n), and a traceback recovers the actual subsequence. LCS is the workhorse behind Unix `diff`, version control merge, and computational biology sequence alignment.
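A TypeScript sketch of both the table fill and the traceback (the function name is illustrative):

```typescript
// LCS length via the standard DP table, then a traceback to recover one subsequence.
function lcs(a: string, b: string): string {
  const m = a.length, n = b.length;
  const dp: number[][] = Array.from({ length: m + 1 }, () => new Array(n + 1).fill(0));
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      dp[i][j] = a[i - 1] === b[j - 1]
        ? dp[i - 1][j - 1] + 1                  // a match extends the diagonal
        : Math.max(dp[i - 1][j], dp[i][j - 1]); // otherwise inherit the larger neighbour
    }
  }
  // Walk back from dp[m][n] to reconstruct one optimal subsequence.
  let i = m, j = n, out = "";
  while (i > 0 && j > 0) {
    if (a[i - 1] === b[j - 1]) { out = a[i - 1] + out; i--; j--; }
    else if (dp[i - 1][j] >= dp[i][j - 1]) i--;
    else j--;
  }
  return out;
}

console.log(lcs("ABCBDAB", "BDCABA")); // "BCBA" (length 4)
```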
Pathfinding & Spatial
A* Pathfinding
Difficulty: medium. A* finds the shortest path from a start cell to a goal cell on a grid by combining the cost so far (g) with an admissible estimate of the remaining cost (h, here Manhattan distance). It always expands the open-set node with the smallest f = g + h, which lets it focus exploration toward the goal rather than fanning out uniformly like Dijkstra's. With an admissible, consistent heuristic, A* is guaranteed to return an optimal path, and it routinely outperforms uninformed search on spatial maps. Pathfinding in games, robotics, and routing systems is the canonical application.
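A simplified TypeScript sketch on a 0/1 grid; it scans a plain array for the smallest f rather than using a binary heap, and returns only the path length (both simplifications are for brevity, and none of the names come from the visualizer itself):

```typescript
// A* on a 4-connected grid with a Manhattan-distance heuristic.
// 0 = open cell, 1 = wall. Returns the shortest path length in steps, or -1.
type Cell = [number, number];

function aStar(grid: number[][], start: Cell, goal: Cell): number {
  const rows = grid.length, cols = grid[0].length;
  const key = ([r, c]: Cell) => r * cols + c;
  const h = ([r, c]: Cell) => Math.abs(r - goal[0]) + Math.abs(c - goal[1]);

  const g = new Map<number, number>([[key(start), 0]]);
  // Open set kept as a plain array scanned for the minimum f = g + h.
  const open: Cell[] = [start];

  while (open.length > 0) {
    let best = 0;
    for (let i = 1; i < open.length; i++) {
      const fi = g.get(key(open[i]))! + h(open[i]);
      const fb = g.get(key(open[best]))! + h(open[best]);
      if (fi < fb) best = i;
    }
    const [r, c] = open.splice(best, 1)[0];
    if (r === goal[0] && c === goal[1]) return g.get(key([r, c]))!;

    for (const [dr, dc] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nr = r + dr, nc = c + dc;
      if (nr < 0 || nr >= rows || nc < 0 || nc >= cols || grid[nr][nc] === 1) continue;
      const tentative = g.get(key([r, c]))! + 1;
      if (tentative < (g.get(key([nr, nc])) ?? Infinity)) {
        g.set(key([nr, nc]), tentative);
        open.push([nr, nc]);
      }
    }
  }
  return -1; // goal unreachable
}

const grid = [
  [0, 0, 0],
  [1, 1, 0],
  [0, 0, 0],
];
console.log(aStar(grid, [0, 0], [2, 0])); // 6
```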
Dijkstra (Grid)
Difficulty: medium. Dijkstra's algorithm specialised to a grid finds the shortest path from a start cell to a goal cell when every step costs the same. It expands the open-set node with the smallest tentative distance, relaxing each unblocked neighbour, until it pops the goal. Unlike A*, it has no heuristic — every direction is treated equally — so it explores the frontier as an expanding ring around the start. The grid version makes the difference from heuristic-guided search tangible: side-by-side with A*, you see exactly how much exploration the heuristic saves.
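The same grid setup as the A* sketch above, without the heuristic; the only change is that cells are ordered by their tentative distance alone:

```typescript
// Dijkstra on a uniform-cost grid: expand cells in order of distance from the start.
// With every step costing 1 and no heuristic, the frontier grows as a ring.
function gridDijkstra(grid: number[][], start: [number, number], goal: [number, number]): number {
  const rows = grid.length, cols = grid[0].length;
  const dist = Array.from({ length: rows }, () => new Array(cols).fill(Infinity));
  dist[start[0]][start[1]] = 0;
  // Frontier scanned for the minimum tentative distance (a priority queue in practice).
  const open: Array<[number, number]> = [start];

  while (open.length > 0) {
    let best = 0;
    for (let i = 1; i < open.length; i++) {
      if (dist[open[i][0]][open[i][1]] < dist[open[best][0]][open[best][1]]) best = i;
    }
    const [r, c] = open.splice(best, 1)[0];
    if (r === goal[0] && c === goal[1]) return dist[r][c];
    for (const [dr, dc] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nr = r + dr, nc = c + dc;
      if (nr < 0 || nr >= rows || nc < 0 || nc >= cols || grid[nr][nc] === 1) continue;
      if (dist[r][c] + 1 < dist[nr][nc]) {
        dist[nr][nc] = dist[r][c] + 1;
        open.push([nr, nc]);
      }
    }
  }
  return -1; // goal unreachable
}

const grid2 = [
  [0, 0, 0],
  [1, 1, 0],
  [0, 0, 0],
];
console.log(gridDijkstra(grid2, [0, 0], [2, 0])); // 6
```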
Recursive Backtracker (Maze)
Difficulty: medium. The recursive backtracker — also called randomised depth-first search — generates a perfect maze (one with exactly one path between any two cells) by carving passages from a starting cell. At each step it picks a random unvisited neighbour two cells away, knocks down the wall between them, and recurses; when no unvisited neighbours remain, it backtracks. The result is a long, winding maze with few short branches, distinct from algorithms like Prim's or Wilson's that produce more uniform structure. Mazes generated this way are a common test bed for pathfinding algorithms.
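An illustrative TypeScript sketch on a wall grid (1 = wall, 0 = passage); the coordinate scheme and function name are assumptions of this example, not the visualizer's implementation:

```typescript
// Recursive backtracker: carve a perfect maze by randomised depth-first search.
// Cells sit on odd coordinates of a (2h+1) x (2w+1) grid so walls fit between them.
function generateMaze(width: number, height: number): number[][] {
  const rows = 2 * height + 1, cols = 2 * width + 1;
  const maze = Array.from({ length: rows }, () => new Array(cols).fill(1));

  function carve(r: number, c: number): void {
    maze[r][c] = 0;
    // Shuffle the four two-cell directions (Fisher-Yates) so the maze is randomised.
    const dirs = [[2, 0], [-2, 0], [0, 2], [0, -2]];
    for (let i = dirs.length - 1; i > 0; i--) {
      const j = Math.floor(Math.random() * (i + 1));
      [dirs[i], dirs[j]] = [dirs[j], dirs[i]];
    }
    for (const [dr, dc] of dirs) {
      const nr = r + dr, nc = c + dc;
      if (nr > 0 && nr < rows && nc > 0 && nc < cols && maze[nr][nc] === 1) {
        maze[r + dr / 2][c + dc / 2] = 0; // knock down the wall in between
        carve(nr, nc);                    // recurse; returning here is the backtrack
      }
    }
  }

  carve(1, 1);
  return maze;
}

console.log(generateMaze(5, 5).map(row => row.map(v => (v ? "#" : " ")).join("")).join("\n"));
```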