"Big O" is a way of describing the efficiency of algorithms. For starters, here are a couple of good links:

A good general description of O(1), O(n), O(n^2), and O(log n)

A good table showing implications for certain sized arrays

But basically, when you are designing algorithms (particularly those which deal with arrays), you need to be aware of how increasing the size of the array you are working with affects the efficiency of those algorithms. In some cases the efficiency, or speed with which the computer can process the algorithm, isn't influenced much by the size of the array, but in some cases it makes a really big difference. Defining efficiency in terms of "Big O" generally lets the programmer know what the **worst case scenario** for efficiency will be when **really large arrays** are used.

Here are the four Big O situations you should be aware of at this point:

O(1) - the algorithm takes the same amount of time no matter how big the array taken in as a parameter is.

If it takes 3 milliseconds to do with an array of 3 elements, it takes 3 milliseconds to do so with an array of 3000 elements.

So, very simply put, a typical Big O(1) algorithm is one which does not loop through the array taken in.
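A quick sketch of this in Python (the notes don't tie the examples to a particular language, so Python is just an assumption here):

```python
def first_element(arr):
    """O(1): grabs the first element directly - no loop, so the
    size of arr makes no difference to how long this takes."""
    return arr[0]

print(first_element([7, 2, 9]))           # one step for 3 elements...
print(first_element(list(range(3000))))   # ...and still one step for 3000
```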

O(n) - the length of time the algorithm takes is directly proportional to the size of the array taken in as a parameter.

If it takes 3 milliseconds to do with an array of 3 elements, it takes 3000 milliseconds to do so with an array of 3000 elements.

So, very simply put, a typical Big O(n) algorithm is one which loops through an array once (which takes longer the bigger the array is).

Classic example: linear search.
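Here's what linear search looks like as a Python sketch - in the worst case (the target is last, or missing), the loop touches every element:

```python
def linear_search(arr, target):
    """O(n): checks elements one by one; in the worst case,
    every element of arr is examined once."""
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1  # not found

print(linear_search([4, 8, 15, 16, 23], 16))  # found at index 3
print(linear_search([4, 8, 15, 16, 23], 99))  # -1, after checking all 5
```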

O(n^2) - the length of time the algorithm takes is proportional to the square of the size of the array taken in as a parameter.

If it takes (3 x 3 = ) 9 milliseconds to do with an array of 3 elements, it takes (3000 x 3000 = ) 9,000,000 milliseconds to do with an array of 3000 elements.

So, very simply put, a typical Big O(n^2) algorithm is one which has a loop within a loop looping through an array - it gets quadratically slower as the size of the array taken in increases.

Classic example: bubble sort.
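And here's the loop-within-a-loop shape of bubble sort, again sketched in Python - the inner loop runs for every pass of the outer loop, which is where the n x n work comes from:

```python
def bubble_sort(arr):
    """O(n^2): a loop within a loop. Each outer pass bubbles the
    largest remaining value to the end of the unsorted portion."""
    arr = arr[:]  # sort a copy, leave the caller's list alone
    n = len(arr)
    for i in range(n - 1):              # outer loop: n - 1 passes
        for j in range(n - 1 - i):      # inner loop: compare neighbours
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]  # swap
    return arr

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```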

O(log n) - the length of time the algorithm takes is proportional to the log of the size of the array.

If it takes 0.47 milliseconds to do with an array of 3 elements, it takes only about 3.47 milliseconds to do so with 3000 elements taken in.

So, very simply put, a typical Big O(log n) algorithm is one which somehow splits up the job again and again into halves, eliminating the need to deal with the other half as it goes.

Classic example: binary search.
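Binary search in Python shows the halving in action - each time through the loop, half of the remaining portion of the (sorted) array is thrown away:

```python
def binary_search(arr, target):
    """O(log n): each comparison halves the portion of the
    sorted array still in play."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1    # target is in the upper half
        else:
            high = mid - 1   # target is in the lower half
    return -1  # not found

print(binary_search([2, 5, 8, 12, 16, 23, 38], 23))  # found at index 5
```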

So, the moral of the story is that you should avoid loops within loops, and learn to like algorithms which work in a binary way.

O(n^2) - or, even worse, O(n^3), a loop within a loop within a loop - is **bad** for working with large arrays.

O(log n) is **good** for working with large arrays.

__Here's another way to compare the main four Big O situations:__

O(1): 10 times the size of the array, the same time to do something.

O(n): 10 times the size of the array, 10 times as long to do something.

O(n^2): 10 times the size of the array, 100 times as long to do something.

O(log n): 10 times the size of the array, only around 3 more steps needed to do something.
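You can see those four growth rates side by side with a few lines of Python (using base-2 logs, as with binary search, where each step halves the array):

```python
import math

# Rough step counts for each Big O class as the array grows 10x.
for n in (3000, 30000):
    print(f"n = {n:>6}:  O(1) = 1   O(n) = {n}   "
          f"O(n^2) = {n * n}   O(log n) = about {math.ceil(math.log2(n))}")
```

Going from 3000 to 30,000 elements, O(n^2) jumps from 9 million to 900 million steps, while O(log n) goes from about 12 steps to about 15 - just 3 more.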

One other point, mathematically, is that sometimes the expression of the Big O of a particular algorithm can get quite complex, if you wanted to be really exact. Take the following two for example:

Example 1: O(n^3 + n)

Example 2: O(n + 1)

In both of the cases above, the expression may exactly describe the situation, but really, in each example the second term doesn't mean that much in the grand scheme of things. Keep in mind we're using Big O to generally communicate things about algorithm efficiencies. As the size of an array goes up, is its efficiency "good", "really good", "bad", "really bad", or "really, really bad"? That's what we want to know - for our purposes, we don't need an exact numeric efficiency measure.

In the first example above, adding on **n** to describe the efficiency really doesn't make that much difference in the end; and the larger the array gets, the more insignificant the effect of that added n becomes.

So in the case of complex expressions of Big O, we tend to ignore insignificant terms in the expression, and simplify to the most significant term only.

Example 1 continued: O(n^3 + n) -----> *becomes* -----> O(n^3)

Example 2 continued: O(n + 1) ------> *becomes* -----> O(n)
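A quick bit of Python arithmetic shows just how insignificant the dropped term is for a large array:

```python
n = 3000
full = n**3 + n        # the "exact" step count: 27,000,003,000
simplified = n**3      # the dominant term only:  27,000,000,000

# The extra n is a vanishingly small fraction of the total
# (about 0.00001% here), and it only shrinks as n grows.
print(full, simplified, (full - simplified) / full)
```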

And one final point. Note that Big O is **only** even considered when we are talking about **large arrays**. If an array is just 100 elements, for example, it's not a big deal whether we use a linear search or a binary search; modern computers will have an easy time with it in either case. But as arrays (and, in fact, other data structures we'll look at later) get big, Big O becomes important. In fact, due to these quadratic, cubic, and logarithmic effects, Big O gets increasingly important as the size increases.