I am a self-taught programmer. I didn't have the opportunity to go to any fancy schools. I did get accepted, but I never had the money for it. In the long run, I don't think it hurt me. But it does have some interesting side effects on how I learn, and every now and then, the lack of that fancy-school education rears its ugly head.
Since I am a self-taught programmer, I feel I can learn anything. All I need is an internet connection, and whether it is fixing my car, taking care of my yard, training my dogs, or learning a new computer programming topic, I can learn at my own pace. But if you stick me in a classroom and take me through a boring 5-day lecture, I couldn't do it. My apologies to anyone who has sat through such lectures I've conducted.
But one particular thing that bugs me is how all the cool kids have lately started talking in this thing called the “Big O notation”. There I am, scrolling through a computer programming discussion or submitting a PR, only to have another developer leave a comment saying, “This won't scale,” followed by something cryptic about O(n) mumbo jumbo.
The first time I saw that, Big O looked like some terrifying high-level math designed to keep people out of the “cool kids” club of software engineering. But here's the secret: it's a shorthand for common sense. It's just a way to describe how much “slower” or “heavier” an algorithm gets as you feed it more data.
Think of it as a way to predict the future. If your app works fine with 10 users, Big O tells you whether it will crash and burn when you hit 10,000.
I thought it would be good to write down my understanding of Big O in an article, so we can all be part of that cool kids club.
The Two Pillars: Time and Space
In programming, there are two pillars you have to balance: time and space. Think of these as the costs of running your code.
When we talk of time complexity, we aren't measuring this in seconds because a supercomputer is faster than your laptop. Instead, we measure the number of operations the CPU has to perform. Does the work grow linearly with the data, or does it explode exponentially?
When we talk of space complexity we are talking about memory (RAM). Does your function need to create a massive new list to process the old one, or can it do the work “in place”?
Think of it like cooking a massive dinner. Time complexity is how many hours you're standing at the stove; space complexity is how many pots, pans, and bowls you end up cluttering the kitchen with. Sometimes you save time by using more pans, and sometimes you save pans by taking more time. In coding, we call this the space-time tradeoff.
This space-time tradeoff is described using the Big O notation. Let's understand this first.
Various Kinds of Big O
Figure 1 shows the various Big O notations. That diagram may not make a lot of sense just yet, but let's strip away the math and look at that chart like a “stress meter” for your computer.
Imagine you are a librarian, and someone asks you to find a specific book. The horizontal line (x-axis) is how many books are in the library, and the vertical line (y-axis) is how much sweat is dripping off your forehead because of how long it's taking.
The “Superhuman” (O(1) - Constant)
The flat line at the bottom. This is the dream. No matter if the library has 10 books or 10 million books, you can find the answer instantly.
For example, you have a list of people, and I ask, “Who is the very first person in line?” It doesn't matter if the line is a mile long; you just look at the first person. Done. And if I ask, "Who is the 98th person in line?" Well, you are superhuman, so you jump straight to position 98 and answer just as fast as you did for the first person. Your time taken is “constant.”
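To make that concrete, here's a tiny sketch of my own (not one of the article's listings): indexing into an array is one operation, whether it holds ten items or a million.

function whoIsInLine() {
  // Reading an element by its index is a single operation - O(1) -
  // no matter how long the line of people is.
  const line = Array.from({ length: 1_000_000 }, (_, i) => `Person ${i + 1}`);

  console.log(line[0]);  // "Person 1"  - the first person: one operation
  console.log(line[97]); // "Person 98" - the 98th person: still one operation
}
whoIsInLine();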
The “Speedy Reader” (O(log n) - Logarithmic)
The line that goes up a little then stays pretty low. This is very fast. Every time the library doubles in size, it only takes you one extra step to find your book. If the library quadruples in size, it doesn't take you 4x as long. It takes you just slightly longer. As the library becomes infinite in size, you've somehow mastered this art of scaling, and the next incremental addition in size doesn't really affect you much.
For example, consider a phone book. If you're looking for “Smith,” you open to the middle. Is “Smith” in the first half or the second half of the book? You just threw away half the book in one move! You keep halving it until you're done. Even if the book had a billion pages, you'd find the name in about 30 flips. You are a speedy reader.
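That halving trick is exactly what binary search does. Here's a rough sketch of my own (the phoneBook array is made up for illustration); each comparison throws away half of the remaining entries, which is why a billion entries take only about 30 steps.

// Binary search over a sorted array: halve the search space on every step.
function binarySearch(sortedNames, target) {
  let low = 0;
  let high = sortedNames.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sortedNames[mid] === target) return mid;  // found it
    if (sortedNames[mid] < target) low = mid + 1; // look in the right half
    else high = mid - 1;                          // look in the left half
  }
  return -1; // not found
}

const phoneBook = ["Adams", "Baker", "Jones", "Miller", "Smith", "Wilson"];
console.log(binarySearch(phoneBook, "Smith")); // 4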
The “Regular Joe” (O(n) - Linear)
You are a regular Joe, given an average problem, with no easy, obvious way to shortcut it. The steady diagonal line. This is fair and predictable. If there are twice as many books, it takes twice as long. Not much you can do about that.
For example, you may have to check every book one by one to find a hidden $20 bill. If there are 100 books, you check 100 times. If there are 200, you check 200 times. If there are 300 ... you get the idea: on average, it takes 3x as long as 100 books. You are scaling linearly with the size of the input.
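In code, that book-by-book hunt is just one pass over the data, with an early exit if you get lucky. Here's a small sketch of my own (the books array and hasTwentyDollarBill flag are made up for illustration):

// A linear scan: in the worst case we look at every one of the n books once.
function findTheTwenty(books) {
  for (const book of books) {
    if (book.hasTwentyDollarBill) return book; // early exit if we get lucky
  }
  return null; // checked every book, no luck
}

const books = [
  { title: "Moby Dick", hasTwentyDollarBill: false },
  { title: "War and Peace", hasTwentyDollarBill: true }
];
console.log(findTheTwenty(books)); // { title: "War and Peace", ... }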
The “Struggler” (O(n log n) - Linearithmic)
Here the line starts to curve upward. This is usually the cost of sorting things. It's not terrible, but it's definitely more work than just looking through a list once. It's like checking every book but having to do a little bit of “phone book” logic for each one.
For example, if you're looking for a hidden $20 bill in one of 100 books, but you have to do a push-up after checking each book. Well, 100 pushups later, I bet you won't be flipping pages as fast as the first one. Your line is curving upwards.
The “Panic Zone” (O(n^2) - Quadratic)
The line that starts shooting toward the sky. This is where things get bad. If the library doubles in size, your work quadruples.
For example, I ask you to check if any two books in the library are identical. You take book #1 and compare it to every other book. Then you take book #2 and compare it to every other book. If you have 1,000 books, you're doing 1,000,000 comparisons. Your computer is going to sound like an airplane taking off, fans whirring at full blast.
The “I Quit” (O(2^n) - Exponential)
The vertical line that goes straight up. This is the nightmare. Adding just one more book doubles the time it takes. If you have 100 books, the sun will literally burn out before you finish the task. You basically never want to see this in your code unless you're trying to crack a password or solve a crazy puzzle.
For example, say I ask you to reverse a one-way hash. That would take an inordinate amount of time, practically infinite as the input gets larger. Or say I mix up three colors of paint, hand you the mix, and ask you to separate the individual colors and put them back in their tubes. You get the idea.
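If you want to see what exponential growth looks like in actual code, the classic illustration (my own addition here, not tied to the library analogy) is the naive recursive Fibonacci, where every call spawns two more calls:

// Naive recursive Fibonacci: each call branches into two more calls,
// so the work roughly doubles every time n grows by one - exponential time.
function fib(n) {
  if (n <= 1) return n;
  return fib(n - 1) + fib(n - 2);
}

console.log(fib(10)); // 55 - instant
// console.log(fib(50)); // uncomment at your own risk: billions of calls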

A Real-World Example
I find it easy to learn new concepts with real-world examples, so let's do that.
Imagine you're building a social media app. You have an array of userIDs, and for some reason, the database glitched and might have given you duplicates. You need to write a function that returns true if there's a duplicate and false if every ID is unique.
You can solve this problem in numerous ways. I am going to solve this problem in three different ways, starting with brute force, and then optimizing it, and explain Big O complexity as I go.
The Brute Force Method
A simple obvious way to solve this problem is to use brute force. In this version, we take the first ID and compare it to every other ID. Then we take the second ID and compare it to every other ID. It's thorough, but it's slow. You can see the logic in Listing 1.
Listing 1: The brute force method
function hasDuplicatesBruteForce(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) {
        console.log(`Found duplicate: ${arr[i]}`);
        return true;
      }
    }
  }
  return false;
}
As you can see, Listing 1 has two nested loops. I pick each element one by one then compare it against every other element in the array to check for duplicates. If I find a match, I flag it.
You can use this code as shown below.
const userIds = [102, 501, 304, 102, 999];
console.log("Brute Force Result:", hasDuplicatesBruteForce(userIds));
This code example has a time complexity of O(n^2), which is quadratic time. If you have 10 IDs, you might do 100 comparisons. If you have 1,000 IDs, you're doing 1,000,000 operations. This will make your app feel laggy very fast.
However, this code example has a space complexity of O(1), which is constant space. We didn't create any new arrays or objects; we just used a couple of loop variables. No matter how much our input data grows, our memory usage stays the same.
Now, while this logic works nicely, is bug free, and is easy to understand, it won't hold up in real life. A senior dev on your team might push back on your PR saying, “This does not scale.”
Okay, back to the drawing board for a better solution.
The Sorting Trade Off
Anyone who has worked with me has heard me say this: In engineering, there are no right decisions, only tradeoffs.
What if we sort the list first? If the IDs are sorted (e.g., [1, 2, 2, 3]), any duplicates will be sitting right next to each other. Can you think of how this might help our time and space complexity for our solution?
Let's look at Listing 2. We first sort the array, working on a copy so we don't affect the original. Then we iterate through the list just once, and duplicates will appear next to each other. Our code is slightly less obvious now that we are comparing numbers side by side to find duplicates. The reader has to understand that the list is sorted and why we sorted it in the first place, so variable naming is important. You can imagine that in more complex scenarios, this could get out of hand fast. We also have to consider edge cases: for instance, I only iterate while i < sortedArr.length - 1 to avoid going out of bounds. The array is zero-based, so the last comparison is between the elements at length - 2 and length - 1, and no further.
Listing 2: The sorted list method
function hasDuplicatesWithSort(arr) {
  // We copy the array so we don't mutate the original
  const sortedArr = [...arr].sort((a, b) => a - b);
  for (let i = 0; i < sortedArr.length - 1; i++) {
    if (sortedArr[i] === sortedArr[i + 1]) {
      console.log(`Found duplicate: ${sortedArr[i]}`);
      return true;
    }
  }
  return false;
}
But on the plus side, the time complexity is now O(n log n). Most modern sorting algorithms are very efficient; JavaScript's built-in sort in engines like V8, for example, uses Timsort under the hood.
Timsort is a combination of merge sort and insertion sort. Think of it this way: Imagine you are sorting a massive pile of playing cards. Instead of throwing them all in a pile and starting from scratch, you do this:
Identify Runs: You scan through the cards and notice, “Hey, these five cards are already in order!” Timsort calls these ordered chunks Runs.
Fix Small Chunks (Insertion Sort): If a chunk of cards is too small (usually fewer than 32 or 64 cards), you quickly put them in order using insertion sort. This is like how you'd sort a hand of cards while playing; it's super-fast for small groups.
Merge the Giants (Merge Sort): Once you have several large, sorted chunks, you combine them using merge sort. This is very efficient for joining two big, sorted lists into one massive, sorted list (there's a small sketch of this merge step right after this list).
The “Galloping” Trick: This is Timsort's secret weapon. If it's merging two lists and notices that every card in List A is smaller than the first card in List B, it stops comparing them one by one. It just “gallops” ahead and moves the whole chunk of List A into the final pile at once.
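To make that “Merge the Giants” step a little more concrete, here's a minimal sketch of merging two already-sorted runs into one sorted list. This is plain merge sort's merge, not Timsort's tuned implementation:

// Merge two already-sorted runs into one sorted list in a single pass.
function mergeSortedRuns(runA, runB) {
  const merged = [];
  let a = 0;
  let b = 0;
  while (a < runA.length && b < runB.length) {
    if (runA[a] <= runB[b]) merged.push(runA[a++]);
    else merged.push(runB[b++]);
  }
  // One run is exhausted; append whatever is left of the other.
  return merged.concat(runA.slice(a), runB.slice(b));
}

console.log(mergeSortedRuns([1, 4, 9], [2, 3, 10])); // [1, 2, 3, 4, 9, 10]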
As you can see, by using Timsort, our time complexity is O(n log n), the "struggler" tier from earlier, which is still much faster than the brute force method for large datasets.
On the other hand, the space complexity is a bit worse. It is now O(n) because we created a copy of the array to sort it and we used extra memory proportional to the number of items.
Using Sets
We have greatly improved our algorithm to find duplicates, but can we do even better? You can use a specific data structure called a set. A set is a special data structure that only allows unique values and has near-instant lookup times. Almost every major programming language has an equivalent.
To understand why sets are so fast, we have to look at how they store data under the hood. Most developers think of a set as just “an array that doesn't allow duplicates,” but the way they find items is completely different. While an array is like a list of names on a piece of paper, a set is like a wall of numbered lockers. The secret sauce sets use is hashing.
To understand hashing, you must stop thinking like a person looking through a list and start thinking like a valet driver.
If you go to a restaurant and give the valet your car, they don't just park it in the first open spot and then “search” every car in the lot when you come back. Instead, they give you a ticket number. When you return, they look at the number, go exactly to that spot, and grab your car.
Hashing is the process of turning your data (the car) into a ticket number (the hash).
When you want to put something into a set, let's say the string Apple, the computer doesn't just put it in the next available slot. Instead, it runs Apple through a hash function. This function turns Apple into a specific number (an address), like 502, and the computer puts Apple into locker 502. Now when asked, "Does Apple exist in our set?" it simply recalculates the hash and checks whether locker 502 is empty or filled. It doesn't have to go through lockers 1, 2, 3 ... all the way to 502 to find out.
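To make the locker idea concrete, here's a deliberately over-simplified toy hash function of my own. Real hash functions (and JavaScript's built-in Set) are far more sophisticated, but the principle is the same:

// A toy hash function: turn a string into a "locker number" (bucket index).
// Real hash functions are far more sophisticated; this just shows the idea.
function toyHash(key, lockerCount) {
  let sum = 0;
  for (const ch of key) sum += ch.charCodeAt(0); // add up character codes
  return sum % lockerCount;                      // wrap into a locker number
}

console.log(toyHash("Apple", 1000)); // always the same locker for "Apple"
// Checking "is Apple here?" means computing this number and looking in
// exactly one locker - no scanning through lockers 1, 2, 3, ...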
Let's look at Listing 3 to understand how using a set greatly improves our program.
Listing 3: Using Sets
function hasDuplicatesOptimized(arr) {
  const seen = new Set();
  for (const id of arr) {
    if (seen.has(id)) {
      console.log(`Found duplicate: ${id}`);
      return true;
    }
    seen.add(id);
  }
  return false;
}
The code in Listing 3 creates a new set. Then we iterate through the items: for each one, we first check whether the set already contains that id (meaning we have "seen" it before). If it does, we know a duplicate is present; if not, we add it to the set and keep going.
This code has significant advantages.
Its time complexity is O(n). This is linear time. If the array has 1,000 items, we check 1,000 items. Period. It scales beautifully. The time taken (remember this means compute time not seconds) is linear to the size of the input. In fact, add some multi-threading and parallelization concepts, and you can run this on multiple threads in parallel to reduce the actual time in seconds.
The space complexity of this solution is O(n). We are trading memory for speed. In the worst case (no duplicates), we end up storing every single ID inside our seen set. But it is still a bit friendlier than the sorting approach, which allocates a full copy of the input up front whether duplicates exist or not, while the set only grows until the first duplicate is found.
Code Patterns
Now that we understand some basic examples of code and their equivalent Big O complexities, let's put our reviewer hat on. What are the code patterns you'd look for to deduce what kind of time or space complexity any given piece of code has? Before I elaborate on this, it is important to understand that real-world code is complex, and Big O complexities are frequently approximations. You'd generally look at a code sample and lean toward the closest pattern it fits. For instance, if you see both O(2^n) and O(1), the exponential O(2^n) dominates and becomes the overall complexity; the O(1) operations become negligible by comparison. Similarly, O(n log n) complexity isn't always immediately obvious, because the actual size of the dataset (n) or the logarithmic behavior may not be apparent from the code snippet you're looking at.
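As a quick illustration of that dominant-term idea, here's a hypothetical function of my own that mixes an O(1) step, an O(n) loop, and an O(n^2) nested loop. The nested loop swamps everything else, so we call the whole thing O(n^2):

// Hypothetical example: O(1) + O(n) + O(n^2) simplifies to O(n^2),
// because the nested loop dominates as n grows.
function summarize(arr) {
  const first = arr[0];                 // O(1): one lookup

  let total = 0;
  for (const x of arr) total += x;      // O(n): one pass

  let pairCount = 0;
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      pairCount++;                      // O(n^2): every pair
    }
  }
  return { first, total, pairCount };
}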
That said, there are code patterns you can look at and internalize to understand the time and space complexity of any given code example. Here is what you look for.
The “Single Loop” (O(n))
This is your standard, everyday code. It's like walking down a line of people and giving everyone a high-five.
function giveHighFives(people) {
  // One loop = O(n)
  for (let i = 0; i < people.length; i++) {
    console.log("High five, " + people[i] + "!");
  }
}
As you can see, your complexity increases linearly with inputs. If there are 10 people, you do 10 high-fives. If there are 1,000 people, you do 1,000. It's fair.
The “Nested Loop” (O(n^2))
Sometimes you need nested loops, but anytime I see one, it is a bit of a red flag. I always wonder if there is a way to improve code with a nested loop. This is like walking down a line of people, but for every single person you meet, you stop and introduce them to every other person in the line.
function introduceEveryone(people) {
  // Outer loop
  for (let i = 0; i < people.length; i++) {
    // Inner loop (The Danger Zone!)
    for (let j = 0; j < people.length; j++) {
      console.log(people[i] + ", meet " + people[j]);
    }
  }
}
As you can see above, every time I stop to shake hands with a person, I make sure to introduce that person to every other person in the room. Can there be a better way? Maybe just shake hands, and introduce amongst yourselves in parallel?
You really have to be careful with such code. Some programmers will argue, "I only have 3 people, so it's not a big deal." But that "3" is a variable, and as your codebase gets more complex, n could easily grow to 100 before you realize you've created this massive "everyone introduce themselves to everyone else" scenario.
The Hidden Nested Loop
Look at this code below.
function findUser(users, targetName) {
  for (let i = 0; i < users.length; i++) {
    // .includes() is actually a hidden loop!
    // It has to look through the whole string/array internally.
    if (users[i].tags.includes(targetName)) {
      return users[i];
    }
  }
}
This code deceptively hides a built-in loop inside a helper function. Even though you only see one for loop, .includes() has to scan an array internally, so you've accidentally created O(n^2) behavior.
With the introduction of fluent APIs and lambda functions, it is now far too easy for developers to create hidden nested loops. Let's see a more real-world example in Listing 4. This is what I call the complexity bomb.
Listing 4: A deceptive example of hidden nested loops
const users = [
  { id: 1, name: "Alice", preferredCategories: ["electronics", "books"] },
  { id: 2, name: "Bob", preferredCategories: ["garden", "books"] }
  // ... imagine 1,000 users
];

const products = [
  { id: 101, name: "Kindle", category: "books" },
  { id: 102, name: "Shovel", category: "garden" }
  // ... imagine 10,000 products
];

/**
 * GOAL: Create a list for each user containing all products
 * that match their preferences.
 */
function getRecommendations(users, products) {
  // Loop 1: .map() goes through every user (O(n))
  return users.map(user => {
    // Loop 2: .filter() goes through every product (O(m))
    const matches = products.filter(product => {
      // Loop 3: .includes() goes through the user's categories (O(k))
      // This is a HIDDEN loop!
      return user.preferredCategories.includes(product.category);
    });
    return {
      userName: user.name,
      recommendations: matches
    };
  });
}

const results = getRecommendations(users, products);
console.log("Recommendations calculated for", results.length, "users.");
Imagine you are building a product recommendation engine. You have a list of users, and each user has a list of preferredCategories. You also have a giant list of products. You want to find every product that matches at least one category for every single user.
At first glance, this code looks “professional” because it uses fancy functional methods like .map() and .filter(). But under the hood, this is a complexity bomb: three nested loops hiding in plain sight, roughly O(n * m * k), which for large inputs behaves like a dreaded O(n^3).
Let's understand why.
The first loop (.map): We visit every user.
The second loop (.filter): Inside every user, we scan every product.
And finally, the third loop (.includes): Inside every product check, we scan the user's category list.
The math gets scary fast. If you have 1,000 users, 10,000 products, and each user has 5 categories, 1,000 * 10,000 * 5 = 50,000,000 operations!
Ouch!
This is where your user's browser will likely show the Page Unresponsive popup because the CPU is stuck doing fifty million comparisons just to show some recommendations.
Sadly, I see far too many professional programmers picking up these fancy knives and getting cut in the process. Some days I feel like all these fancy new language features are like handing an AK-47 to a monkey.
So How Do We Fix It?
Before you read further, take a moment to think about how you'd fix this yourself. Try and get to an O(n) solution if you can.
Easy! We can use the hashing trick we learned earlier to “flatten” this. Instead of asking the user's category list “Do you include this?” over and over for every single product, we pre-group the products into a Map keyed by category, so each lookup is near-instant.
You can see this in action in Listing 5. The logic is quite simple. Instead of iterating over every single possibility, we first pre-group the products by category one time, giving us a complexity of O(m). In real-world terms, you have put the products neatly into categorized bins. Remember the hashing/valet idea from earlier? Now that things are in bins, it's easy to find the right bin when needed.
Listing 5: A better O(n) way to achieve the same result
function getRecommendationsOptimized(users, products) {
  // Step 1: Pre-group products by category once O(m)
  // This is like putting products into categorized bins first.
  const productMap = new Map();
  for (const product of products) {
    if (!productMap.has(product.category)) {
      productMap.set(product.category, []);
    }
    productMap.get(product.category).push(product);
  }

  // Step 2: Now we just look into the bins O(n * k)
  return users.map(user => {
    let recommendations = [];
    // We only loop through the user's few categories,
    // NOT the whole product list!
    user.preferredCategories.forEach(cat => {
      const matches = productMap.get(cat) || [];
      recommendations = [...recommendations, ...matches];
    });
    return {
      userName: user.name,
      recommendations
    };
  });
}
Then, in Step 2, we just look in the bins, which costs O(n * k): for each user, we only loop through their few categories instead of the entire product list.
The result? By using a map (hashing), we went from 50 million operations down to roughly 15,000. It's the same result, but the optimized version is 3,300 times faster.
Which of These Is the Right Solution?
In modern development, memory (space) is often cheaper than time. Users will definitely notice if their screen freezes for 5 seconds while an O(n^2) algorithm chugs along, but they probably won't notice if your app uses an extra 2MB of RAM to hold a set. Generally, your goal as a developer is to get your time complexity down to O(n) or O(log n) whenever possible, even if it means using a bit more memory. But certainly, this depends on the situation. What if you were running this program on an IoT device where time didn't matter much but memory mattered a lot more? Or imagine a situation where you're offloading your work to a GPU or TPU, which is optimized to run a specific program extremely fast, so your time complexity matters less. In that scenario, memory is extremely fast but very limited, so space complexity matters more.
This is why mastering these concepts is paramount. So maybe the cool kids who had the opportunity to attend a fancy school do have something going for them. Well, at least they have a common language to describe scale and complexity issues. And now, you too are part of that cool kids club.
Until next time, happy coding.