# java - time complexity for iterating through an array list

I have an array list which I iterate through. In each iteration I call `get()` to get one element, and if that element passes some condition, it is added to a new array list using `add()`:

```java
List<Item> items = new ArrayList<Item>();
List<Item> lessItems = new ArrayList<Item>();
for (int index = 0; index < items.size(); index++) {
    Item toCheck = items.get(index);
    if (toCheck meets some condition) {
        lessItems.add(toCheck);
    }
}
```

I am not sure what the time complexity is here. I am calling get() on all items so that is O(n). Then I am also calling add() on potentially all the items so there's another O(n). Not too sure on this one.

All other answers are wrong here.

1. Your loop iterating over the `items` list: complexity is `O(n)`.
2. Inserting each matching item at the end of `lessItems`: with a plain fixed-size array each insert would be `O(n)`, as others said. But Java implements `ArrayList` as a dynamic (growable) array, so appending at the end costs only amortized `O(1)`, which you can loosely call `O(1)`.

So the complexity of your code is `O(n) * amortized O(1)`, which is simply `O(n)`.
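To make this concrete, here is a runnable sketch of the loop in question. The even-number check is a stand-in for the questioner's unspecified condition, since the original is pseudocode. Each `get()` is `O(1)` and each `add()` is amortized `O(1)`, so the single pass is `O(n)`:

```java
import java.util.ArrayList;
import java.util.List;

public class FilterDemo {
    public static void main(String[] args) {
        List<Integer> items = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            items.add(i);
        }

        // One pass over the list: get() by index is O(1), and appending
        // with add() is amortized O(1), so the whole loop is O(n).
        List<Integer> lessItems = new ArrayList<>();
        for (int index = 0; index < items.size(); index++) {
            Integer toCheck = items.get(index);
            if (toCheck % 2 == 0) { // stand-in for "meets some condition"
                lessItems.add(toCheck);
            }
        }
        System.out.println(lessItems.size()); // 500 even numbers kept
    }
}
```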

Another reference: dynamic array

If the complexity of inserting at the end of the array were `O(n)`, the total complexity would be `O(n^2)`, not O(2*n) as other answers said, because the total cost of the inserts would be `1 + 2 + 3 + ... + n = n*(n+1)/2`.
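A small simulation (my own sketch, not `ArrayList`'s actual code) contrasts the two strategies: growing by one slot per insert produces exactly the quadratic sum above, while `ArrayList`-style 1.5x geometric growth keeps the total number of element copies linear in `n`:

```java
public class GrowthComparison {
    public static void main(String[] args) {
        int n = 10_000;

        // Grow-by-one: every insert reallocates and copies all existing
        // elements, so total copies = 1 + 2 + ... + (n-1) = n*(n-1)/2.
        long naiveCopies = 0;
        for (int size = 1; size < n; size++) {
            naiveCopies += size; // copy `size` old elements before inserting
        }

        // ArrayList-style geometric growth (capacity grows by ~1.5x when
        // full): copies happen only at growth points, totalling O(n).
        long geometricCopies = 0;
        int capacity = 10; // ArrayList's default initial capacity
        for (int size = 0; size < n; size++) {
            if (size == capacity) {
                geometricCopies += size; // copy old elements to bigger array
                capacity = capacity + (capacity >> 1);
            }
        }

        System.out.println("grow-by-one copies: " + naiveCopies);   // ~n^2/2
        System.out.println("geometric copies:   " + geometricCopies); // < 3n
    }
}
```

For `n = 10,000` the grow-by-one strategy performs about fifty million copies, while geometric growth performs fewer than `3n`, which is why the amortized cost per `add()` is constant.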

As the official documentation states:

> The size, isEmpty, get, set, iterator, and listIterator operations run in constant time. The add operation runs in amortized constant time, that is, adding n elements requires O(n) time. All of the other operations run in linear time (roughly speaking). The constant factor is low compared to that for the LinkedList implementation.

Here is the logic of the `grow` method, taken from the official Java source code:

```java
private void ensureExplicitCapacity(int minCapacity) {
    modCount++;

    // overflow-conscious code
    if (minCapacity - elementData.length > 0)
        grow(minCapacity);
}

private void grow(int minCapacity) {
    // overflow-conscious code
    int oldCapacity = elementData.length;
    int newCapacity = oldCapacity + (oldCapacity >> 1);
    if (newCapacity - minCapacity < 0)
        newCapacity = minCapacity;
    if (newCapacity - MAX_ARRAY_SIZE > 0)
        newCapacity = hugeCapacity(minCapacity);
    // minCapacity is usually close to size, so this is a win:
    elementData = Arrays.copyOf(elementData, newCapacity);
}
```

As the source code shows, when adding an element would make the size exceed the current capacity, the backing array is grown. The new capacity of the grown array is:

```java
int newCapacity = oldCapacity + (oldCapacity >> 1);
```

This 1.5x growth factor is the trick that makes inserting amortized `O(1)`.
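Replaying that growth rule by hand (a sketch, starting from `ArrayList`'s default initial capacity of 10) shows the capacity sequence growing geometrically, so the number of reallocations for n adds is only O(log n):

```java
public class CapacitySequence {
    public static void main(String[] args) {
        // Replay ArrayList's rule: newCapacity = oldCapacity + (oldCapacity >> 1)
        int capacity = 10; // default initial capacity
        StringBuilder sequence = new StringBuilder();
        for (int i = 0; i < 8; i++) {
            sequence.append(capacity).append(' ');
            capacity = capacity + (capacity >> 1);
        }
        System.out.println(sequence.toString().trim());
        // prints: 10 15 22 33 49 73 109 163
    }
}
```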

You are doing an iteration, and that's O(n).

You're also adding items to an `ArrayList`, which is amortized O(1) per add.

Getting an element by index is also O(1).

So you're doing n iterations of O(1) operations, which comes out to O(n).
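For reference, the same single-pass filter can be written with the streams API (a sketch; the even-number predicate stands in for the questioner's unspecified condition). `filter()` visits each element once, so it is still O(n):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class StreamFilterDemo {
    public static void main(String[] args) {
        List<Integer> items =
                IntStream.range(0, 100).boxed().collect(Collectors.toList());

        // filter() examines each element exactly once: O(n) overall
        List<Integer> lessItems = items.stream()
                .filter(x -> x % 2 == 0) // stand-in condition
                .collect(Collectors.toList());

        System.out.println(lessItems.size()); // 50
    }
}
```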

Big-O and similar notations are asymptotic bounds on time complexity. They discard constant coefficients and are used to estimate run-time as a function of input size.

So `2*n`, `3*n`, etc. are represented as `O(n)`, and `2*n*log(n)`, `3*n*log(n)`, etc. are represented as `O(n*log(n))`.

Since each `add()` call inserts only one element, its runtime is roughly some small constant `k`. The total runtime is therefore some constant `j` times `(n + k)`, in other words `j*n`, or `O(n)`.

In this case, and all similar cases, any constant `k` multiplied by `n` is represented as `O(n)`, meaning the runtime grows linearly with the size of the input `ArrayList`.

You are right: this is O(2n), which is the same as O(n).