Photo by NeONBRAND on Unsplash

In this post we will explore the difference between a data structure and an algorithm. We’ll look at a cooking example from the real world where we implicitly used a “data structure” and an “algorithm”.

Perhaps you’ve wondered: can I find the function representing the runtime of a program by executing the program many times with different input sizes and plotting the results? And what does Big-Oh notation give us?

Next, we’ll learn about time complexity by analyzing a program with 3 separate for loops, each going from 1 to n, where n is the input size.

Finally, we’ll examine two algorithms (or programs), A and B, with running times of n²/4 and 2n·log₂(n) respectively, and figure out which one is faster. If the input size is n = 16, which one should you use?

From my background as a physicist, I see algorithms as transformations and data structures as function arguments. For example, in physics we could apply a transformation to a set of spherical coordinates, or maybe it’s better to use rectangular coordinates; it varies from problem to problem. In my opinion, this is similar to deciding whether we pass data in as a linked list or an array. Some algorithms might require us to know the exact index of our data; in that case we might want to use an array. Another might require us to work with the head or tail of our data; in that case we might want to use a linked list.
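To make that concrete, here is a minimal Python sketch (the variable names and data are my own illustration) contrasting index-based access on an array with head/tail access on a linked-list-like structure, using Python’s deque as a stand-in for a linked list:

from collections import deque

# Array-like structure: constant-time access by exact index.
coordinates = ["r", "theta", "phi"]
print(coordinates[1])        # O(1) lookup at index 1 -> "theta"

# Linked-list-like structure: cheap work at the head and tail.
tasks = deque(["wake up", "exercise", "work"])
tasks.appendleft("stretch")  # O(1) insert at the head
tasks.pop()                  # O(1) removal from the tail
print(tasks)                 # deque(['stretch', 'wake up', 'exercise'])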

I am a very task-driven person. From my point of view, it is easiest for me to organize my tasks in the form of an associative array. First, I list out all of my major tasks by priority. Then I create sub-tasks within these, and sometimes even sub-tasks within those. Eventually, I have this onion-like structure where, if I go to the index “exercise”, I have another associative array consisting of “morning workout”, “cardio workout”, and “evening workout”. If I traverse these sub-indexes I can list out all of the exercises that I know how to do.
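Here is a minimal sketch of that onion-like structure in Python (the specific exercises are hypothetical fillers):

# Nested associative arrays: each major task maps to its sub-tasks.
tasks = {
    "exercise": {
        "morning workout": ["push-ups", "sit-ups"],
        "cardio workout": ["running"],
        "evening workout": ["stretching"],
    },
}

def traverse(node, depth=0):
    # Recursively print every task, sub-task, and exercise.
    if isinstance(node, dict):
        for name, sub in node.items():
            print("  " * depth + name)
            traverse(sub, depth + 1)
    else:  # a leaf list of exercises
        for item in node:
            print("  " * depth + item)

traverse(tasks["exercise"])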

In the case of my daily task array, I could test the running time of my traversal algorithm by increasing the size of the array and recording how long the traversal takes. Then, if I plot the results, I can develop an approximate function that produces a running time value for a given input size. That works because the running time depends on the input size. The problem with this, though, is that I have only established an experimental running time. For other people to get the same results, they would need to execute the algorithm on the exact same hardware with the exact same software, which would be difficult.
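A rough version of that experiment might look like this in Python (the exact numbers will vary with hardware and software, which is exactly the problem):

import time

def traverse(task_array):
    for task in task_array:
        pass  # visit each task

for n in [1_000, 10_000, 100_000, 1_000_000]:
    tasks = list(range(n))
    start = time.perf_counter()
    traverse(tasks)
    elapsed = time.perf_counter() - start
    print(f"n = {n:>9}: {elapsed:.6f} s")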

If I were to pseudocode my algorithm, it would be something like:

taskList(taskArray)

If I were to analyze this theoretical implementation, I would see that I have 2 ops to get the size and n ops to traverse the list, for a total of n + 2 ops. In big-Oh notation, that is an O(n) running time. Big-Oh is a way to describe the upper bound on the worst-case running time of our algorithm.
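A runnable Python version of that pseudocode, with the operation counts called out, might look like:

def task_list(task_array):
    n = len(task_array)      # ~2 ops: fetch the size and store it
    for task in task_array:  # n ops: one visit per task
        print(task)

task_list(["exercise", "work", "sleep"])  # n + 2 ops total, i.e. O(n)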

If we had a program that consisted of 3 separate loops, each going from 1 to n, then each one would contribute exactly n operations. This means that our program would consist of 3n operations in total. Since big-Oh drops constant factors, the time complexity of the algorithm would still be O(n).
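For example, a sketch of such a program:

def three_loops(n):
    count = 0
    for i in range(n):  # first loop: n operations
        count += 1
    for i in range(n):  # second loop: n more operations
        count += 1
    for i in range(n):  # third loop: n more operations
        count += 1
    return count        # 3n operations in total, still O(n)

print(three_loops(16))  # 48 operations for n = 16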

Let’s say that we evaluate the two algorithms for an input of size 16. It seems like the second one takes longer, but eventually n²/4 becomes greater than 2n·log₂(n). For input size 16, n²/4 is 64 and 2n·log₂(n) is 128. The running time depends on the input size, so for this particular case, with an input size of 16, I would recommend using algorithm A, with running time n²/4. However, for input sizes larger than 43, I would recommend algorithm B, with running time 2n·log₂(n).
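A quick sketch to check that crossover numerically:

import math

def runtime_a(n):
    return n ** 2 / 4            # algorithm A

def runtime_b(n):
    return 2 * n * math.log2(n)  # algorithm B

for n in [16, 43, 44, 64]:
    a, b = runtime_a(n), runtime_b(n)
    print(f"n = {n:>2}: A = {a:7.1f}, B = {b:7.1f} -> use {'A' if a < b else 'B'}")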

I like building technology. That’s pretty much what I live for. http://www.evan-gertis.com/