Introduction
Most developers struggle with algorithms for the same reason: they mix too many tasks at once. They try to understand the problem, invent a strategy, write syntax-correct code, and debug edge cases simultaneously.
A better approach is to split algorithm work into clear stages. This article gives you a repeatable system that makes algorithm design easier, faster, and more reliable.
The Core Principle
Do not start with code. Start with structure.
Your goal is to prove correctness first, then translate to code.
Step 1: Normalize the Problem Statement
Write down four items before any coding:
- Input format.
- Output format.
- Constraints.
- Edge cases.
Example normalization template:
Input: list[int] nums, int target
Output: bool
Constraints: len(nums) up to 1e5
Edge cases: empty list, duplicates, negative values
This prevents scope drift and wrong assumptions.
Step 2: Build Tiny Examples First
Choose small examples and walk through them manually.
Use at least:
- One normal case.
- One minimal case.
- One pathological case.
Why this helps:
- Reveals hidden assumptions early.
- Exposes boundary conditions.
- Gives expected outputs for tests.
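These manual examples can be written down directly as test data before any solution exists. A minimal sketch, using a hypothetical pair-sum problem (the `has_pair_sum` name and the brute-force placeholder are illustrative, not a finished solution):

```python
# Tiny examples captured as (input, expected) pairs before any real coding.
cases = [
    (([2, 7, 11], 9), True),   # normal case
    (([], 5), False),          # minimal case: empty input
    (([3, 3], 6), True),       # pathological case: duplicates summing to target
]

def has_pair_sum(nums, target):
    # Placeholder brute force used only to confirm the expected outputs;
    # it checks every distinct index pair.
    return any(nums[i] + nums[j] == target
               for i in range(len(nums)) for j in range(i + 1, len(nums)))

for (nums, target), expected in cases:
    assert has_pair_sum(nums, target) == expected
```

The same tuples become the first layer of the real test suite in Step 8.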
Step 3: Identify the Algorithm Family
Many problems map to a known pattern. Detecting that pattern early saves time.
Common families:
- Two pointers.
- Sliding window.
- Hashing.
- Binary search on answer.
- BFS/DFS on graphs and trees.
- Dynamic programming.
- Greedy with proof.
Ask: “What structure do I have?” rather than “What code do I write?”
Step 4: Define Invariants
An invariant is a condition that stays true during the algorithm.
Example for two pointers:
Invariant: every pair using an index left of i or right of j has already been evaluated or eliminated.
Example for BFS:
Invariant: nodes are dequeued in non-decreasing distance from source.
Invariants make debugging easier because you can inspect where truth breaks.
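A sketch of how an invariant can be checked in place, using a two-pointer pair search over a sorted list (the `pair_sum_sorted` name is illustrative):

```python
def pair_sum_sorted(nums, target):
    # Two pointers over a sorted list: return True if any pair sums to target.
    left, right = 0, len(nums) - 1
    while left < right:
        # Invariant check while designing: pointers stay in bounds and ordered.
        assert 0 <= left < right <= len(nums) - 1
        s = nums[left] + nums[right]
        if s == target:
            return True
        if s < target:
            left += 1    # nums[left] is too small for any remaining partner
        else:
            right -= 1   # nums[right] is too large for any remaining partner
    return False
```

When a bug appears, the assertion pinpoints the first step where the claimed truth breaks.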
Step 5: Write Pseudocode Before Real Code
Pseudocode forces clarity without syntax distractions.
function find_max(nums):
    if nums is empty:
        return null
    best = nums[0]
    for each n in nums[1:]:
        if n > best:
            best = n
    return best
If pseudocode feels unstable, code will be worse. Fix logic first.
Step 6: Estimate Complexity Early
Before implementation, estimate:
- Time complexity.
- Space complexity.
- Whether it satisfies constraints.
If constraints are n <= 1e5, an O(n^2) strategy is often dead on arrival.
Quick reference:
- O(n): often fine for n up to 1e5.
- O(n log n): often fine for n up to 1e5 to 1e6.
- O(n^2): usually only safe for small n.
Step 7: Code as Translation, Not Discovery
At this stage, coding should be mechanical translation of pseudocode.
Python example (clear version)
def find_max(nums: list[int]) -> int | None:
    if not nums:
        return None
    best = nums[0]
    for n in nums[1:]:
        if n > best:
            best = n
    return best
Go example (same logic)
func FindMax(nums []int) (int, bool) {
    if len(nums) == 0 {
        return 0, false
    }
    best := nums[0]
    for _, n := range nums[1:] {
        if n > best {
            best = n
        }
    }
    return best, true
}
Same idea, different syntax.
Step 8: Test in Layers
Use layered tests:
- Happy path tests.
- Edge case tests.
- Randomized tests against a brute-force baseline.
Brute-force validation pattern
For many interview-style problems:
- Build a simple but slow reference solution.
- Generate random input.
- Compare optimized output with reference output.
This catches subtle logic bugs quickly.
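A minimal harness for this pattern, assuming the hash-set pair-sum solution from later in this article as the optimized candidate (the `brute_force` and `optimized` names are illustrative):

```python
import random

def brute_force(nums, target):
    # Slow but obviously correct reference: check every distinct pair.
    return any(nums[i] + nums[j] == target
               for i in range(len(nums)) for j in range(i + 1, len(nums)))

def optimized(nums, target):
    # Candidate under test: hash-set approach.
    seen = set()
    for n in nums:
        if target - n in seen:
            return True
        seen.add(n)
    return False

# Many small random inputs beat a few large ones for finding logic bugs.
for _ in range(1000):
    nums = [random.randint(-10, 10) for _ in range(random.randint(0, 8))]
    target = random.randint(-20, 20)
    assert optimized(nums, target) == brute_force(nums, target), (nums, target)
```

Keeping inputs small makes failing cases easy to replay by hand.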
Step 9: Add Assertions for Internal Guarantees
Assertions help validate invariants during development.
assert left <= right
assert window_sum >= 0
Avoid shipping noisy assertions in performance-critical loops, but use them freely while designing.
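A sketch of where such assertions fit in a real loop, using a sliding-window problem over non-negative integers (the `min_subarray_len` name is illustrative):

```python
def min_subarray_len(nums, target):
    # Shortest contiguous subarray of non-negative ints with sum >= target;
    # returns 0 if none exists.
    best = float("inf")
    window_sum = 0
    left = 0
    for right, n in enumerate(nums):
        window_sum += n
        while window_sum >= target:
            best = min(best, right - left + 1)
            window_sum -= nums[left]
            left += 1
        # Development-time invariant checks; disable or remove before shipping.
        assert left <= right + 1
        assert window_sum >= 0
    return 0 if best == float("inf") else best
```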
Step 10: Debug by State, Not Guessing
When output is wrong, avoid random edits. Inspect state transitions.
Debug checklist:
- Are indices moving correctly?
- Is initialization correct?
- Does update order break invariants?
- Are boundary conditions handled?
- Are duplicates or ties handled explicitly?
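One concrete way to inspect state transitions is to instrument the loop with a trace, as in this sketch of the find_max logic from Step 7 (the `find_max_traced` name is illustrative):

```python
def find_max_traced(nums):
    # Same logic as find_max, but each iteration logs its state so you can
    # see exactly where the first wrong transition happens.
    if not nums:
        return None
    best = nums[0]
    for i, n in enumerate(nums[1:], start=1):
        updated = n > best
        print(f"i={i} n={n} best={best} update={updated}")
        if updated:
            best = n
    return best
```

Reading the trace against the checklist above is far more reliable than editing code at random.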
Pattern Library You Should Memorize
A small pattern library gives huge leverage.
Two Pointers
Great for sorted arrays and pair constraints.
Sliding Window
Great for contiguous subarray/substring problems with dynamic constraints.
Prefix Sum
Great for range sums and subarray counts.
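A sketch of the subarray-count use: prefix[j] - prefix[i] == k means nums[i:j] sums to k, so a hash map of earlier prefix sums answers the count in one pass (the `count_subarrays_with_sum` name is illustrative):

```python
from collections import defaultdict

def count_subarrays_with_sum(nums, k):
    # Count contiguous subarrays summing to k via prefix sums.
    counts = defaultdict(int)
    counts[0] = 1          # empty prefix
    prefix = 0
    total = 0
    for n in nums:
        prefix += n
        total += counts[prefix - k]  # earlier prefixes that close a k-sum window
        counts[prefix] += 1
    return total
```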
Hash Map / Hash Set
Great for frequency counting and membership tests in O(1) average time.
BFS
Great for shortest path in unweighted graphs.
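A minimal sketch of this use, computing distances from a source in an unweighted graph given as an adjacency dict (the `bfs_distances` name and graph representation are assumptions):

```python
from collections import deque

def bfs_distances(adj, source):
    # BFS over {node: [neighbors]}; nodes are dequeued in non-decreasing
    # distance from source, which is exactly the invariant from Step 4.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nb in adj.get(node, []):
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return dist
```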
DFS
Great for connectivity, topological tasks, and recursive structure traversal.
Dynamic Programming
Great when subproblems overlap and optimal substructure exists.
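A small sketch showing both properties at once, using the classic coin-change problem (the `min_coins` name is illustrative): each dp[a] is built from overlapping subproblems dp[a - coin], and an optimal answer for amount a contains an optimal answer for a smaller amount.

```python
def min_coins(coins, amount):
    # Bottom-up DP: dp[a] = fewest coins summing to a, or -1 if impossible.
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for coin in coins:
            if coin <= a and dp[a - coin] + 1 < dp[a]:
                dp[a] = dp[a - coin] + 1
    return -1 if dp[amount] == INF else dp[amount]
```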
Example: Turning a Problem into a Method
Problem: Find if any two numbers sum to target.
Normalize
Input: nums, target
Output: bool
Constraint: n large
Pattern detection
Use hash set for seen values.
Pseudocode
seen = empty set
for n in nums:
    if target - n in seen:
        return true
    add n to seen
return false
Complexity
Time O(n), space O(n).
Python implementation
def has_pair_sum(nums: list[int], target: int) -> bool:
    seen: set[int] = set()
    for n in nums:
        if target - n in seen:
            return True
        seen.add(n)
    return False
This demonstrates the full process from statement to proof-oriented implementation.
Common Mistakes That Make Algorithms Harder
- Coding before constraints are clear.
- Ignoring edge cases until the end.
- Choosing cleverness over readability.
- Mixing optimization with initial correctness.
- Not writing tests from manual examples.
A 15-Minute Practice Routine
Use this routine daily:
- 3 minutes: normalize the problem.
- 4 minutes: manual examples and edge cases.
- 3 minutes: pseudocode and complexity.
- 5 minutes: implementation + one validation test.
This trains algorithm thinking faster than random coding drills.
Conclusion
Writing algorithms becomes easier when you treat it as a design workflow, not a typing exercise. Use constraints, invariants, pseudocode, and layered testing to reduce mistakes and improve speed.
The biggest improvement is not learning one more trick. It is using a stable process every time.