In-place algorithm for interleaving an array

Question Detail: 

You are given an array of $2n$ elements

$$a_1, a_2, \dots, a_n, b_1, b_2, \dots, b_n$$

The task is to interleave the array, using an in-place algorithm such that the resulting array looks like

$$b_1, a_1, b_2, a_2, \dots, b_n, a_n$$

If the in-place requirement weren't there, we could easily create a new array and copy the elements over, giving an $\mathcal{O}(n)$ time algorithm.
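For concreteness, here is a minimal Python sketch of that out-of-place version (the name `interleave_copy` is mine, just for illustration):

```python
def interleave_copy(A):
    """O(n) time, but O(n) extra space: build the result in a new array."""
    n = len(A) // 2
    out = []
    for i in range(n):
        out.append(A[n + i])  # b_{i+1}
        out.append(A[i])      # a_{i+1}
    return out
```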

With the in-place requirement, a divide-and-conquer approach bumps the running time up to $\Theta(n \log n)$.

So the question is:

Is there an $\mathcal{O}(n)$ time algorithm, which is also in-place?

(Note: you can assume the uniform-cost word RAM model, so "in-place" translates to an $\mathcal{O}(1)$ space restriction.)

Asked By : Aryabhata
Best Answer from StackExchange

Question Source : http://cs.stackexchange.com/questions/332

Answered By : Aryabhata

Here is an answer that elaborates on the algorithm from the paper linked by Joe: http://arxiv.org/abs/0805.1598

First let us consider a $\Theta(n \log n)$ algorithm which uses divide and conquer.

1) Divide and Conquer

We are given

$$a_1, a_2, \dots, a_n, b_1, b_2, \dots, b_n$$

Now, to use divide and conquer, for some $m = \Theta(n)$, we try to get the array

$$[a_1, a_2, \dots, a_m, b_1, b_2, \dots, b_m], [a_{m+1}, \dots, a_n, b_{m+1}, \dots, b_n]$$

and recurse.

Notice that the portion $$b_1, b_2, \dots, b_m, a_{m+1}, \dots, a_n$$ is a cyclic shift of

$$a_{m+1}, \dots, a_n, b_1, \dots, b_m$$

by $m$ places.

This is a classic operation that can be done in place with three reversals, in $\mathcal{O}(n)$ time.
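For example, here is a sketch of the three-reversal rotation in Python (the helper names `reverse` and `rotate_left` are mine):

```python
def reverse(A, i, j):
    """Reverse A[i..j] (inclusive) in place."""
    while i < j:
        A[i], A[j] = A[j], A[i]
        i, j = i + 1, j - 1

def rotate_left(A, lo, hi, m):
    """Left-rotate the block A[lo..hi] (inclusive) by m positions,
    via three reversals: O(n) time, O(1) space."""
    reverse(A, lo, lo + m - 1)  # reverse the first m elements
    reverse(A, lo + m, hi)      # reverse the rest
    reverse(A, lo, hi)          # reverse the whole block
```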

Thus divide and conquer gives a $\Theta(n \log n)$ algorithm, with a recurrence of the form $T(n) = 2T(n/2) + \Theta(n)$.
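Putting these together, a sketch of the $\Theta(n \log n)$ divide and conquer (reusing `rotate_left` from above; note that the recursion itself costs $\mathcal{O}(\log n)$ stack space, which the final algorithm will avoid):

```python
def interleave_dc(A, lo, n):
    """Interleave A[lo : lo + 2n] = a_1..a_n b_1..b_n into b_1 a_1 ... b_n a_n."""
    if n <= 1:
        if n == 1:
            A[lo], A[lo + 1] = A[lo + 1], A[lo]  # [a_1, b_1] -> [b_1, a_1]
        return
    m = n // 2
    # a_{m+1}..a_n b_1..b_m occupies A[lo+m .. lo+n+m-1];
    # left-rotating it by n - m yields b_1..b_m a_{m+1}..a_n
    rotate_left(A, lo + m, lo + n + m - 1, n - m)
    interleave_dc(A, lo, m)              # first 2m elements
    interleave_dc(A, lo + 2 * m, n - m)  # remaining 2(n-m) elements
```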

2) Permutation Cycles

Now, another approach to the problem is to consider the permutation as a set of disjoint cycles.

The permutation is given by (assuming positions start at $1$)

$$ j \mapsto 2j \bmod (2n+1)$$

If we somehow knew exactly what the cycles were, we could realize the permutation using constant extra space: pick an element $A$, determine where it goes (using the above formula), put the element at the target location into temporary space, place $A$ there, and continue along the cycle. Once we are done with one cycle, we move on to an element of the next cycle, follow that cycle, and so on.
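In code, following a single cycle might look like this (a sketch; `follow_cycle` and its parameters are illustrative names, with `lo` being the 0-indexed offset of the block inside the array, which we will need later when recursing on a suffix):

```python
def follow_cycle(A, lo, start, mod):
    """Shift elements along the cycle of j -> 2j mod `mod` (= 2n+1)
    that passes through the 1-indexed position `start`."""
    val = A[lo + start - 1]  # lift the first element out
    j = start
    while True:
        j = (2 * j) % mod                        # where the held element belongs
        A[lo + j - 1], val = val, A[lo + j - 1]  # drop it in, pick up the occupant
        if j == start:                           # back at the start: cycle done
            break
```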

This would give us an $\mathcal{O}(n)$ time algorithm, but it assumes that we "somehow knew what the exact cycles were", and trying to do this book-keeping within the $\mathcal{O}(1)$ space limitation is what makes this problem hard.

This is where the paper uses number theory.

It can be shown that, in the case when $2n + 1 = 3^k$, the elements at positions $1, 3, 3^2, \dots, 3^{k-1}$ are in different cycles, and every cycle contains an element at a position of the form $3^m$, $m \ge 0$.

This uses the fact that $2$ is a generator of $(\mathbb{Z}/3^k)^*$.

Thus when $2n+1 = 3^k$, the follow-the-cycle approach gives us an $\mathcal{O}(n)$ time algorithm: for each cycle, we know exactly where to begin, namely at the powers of $3$ (including $1$), and those can be enumerated in $\mathcal{O}(1)$ space.
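So for the $2n+1 = 3^k$ case, the whole pass is just (building on the `follow_cycle` sketch above):

```python
def interleave_3k(A, lo, n):
    """Interleave the block A[lo : lo + 2n] in place, assuming 2n + 1
    is a power of 3; the cycle leaders are 1, 3, 9, ..., 3^(k-1)."""
    mod = 2 * n + 1
    leader = 1
    while leader < mod:
        follow_cycle(A, lo, leader, mod)
        leader *= 3
```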

3) Final Algorithm

Now we combine the above two: Divide and Conquer + Permutation Cycles.

We do a divide and conquer, but pick $m$ so that $2m+1$ is a power of $3$ and $m = \Theta(n)$.

So instead of recursing on both "halves", we recurse on only one (the first block, of size $2m$, is handled directly by the cycle-leader method) and do $\Theta(n)$ extra work.

This gives us the recurrence $T(n) = T(cn) + \Theta(n)$ (for some $0 < c < 1$) and thus an $\mathcal{O}(n)$ time, $\mathcal{O}(1)$ space algorithm!
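Here is a sketch of the combined algorithm, reusing `rotate_left` and `interleave_3k` from the sketches above; the tail recursion is written as a loop, so the space usage really is $\mathcal{O}(1)$:

```python
def interleave(A):
    """In-place interleave: a_1..a_n b_1..b_n -> b_1 a_1 ... b_n a_n,
    in O(n) time and O(1) extra space."""
    lo, n = 0, len(A) // 2
    while n > 1:
        p = 1                      # largest power of 3 with p <= 2n+1
        while 3 * p <= 2 * n + 1:
            p *= 3
        m = (p - 1) // 2           # 2m+1 = p is a power of 3, and m = Theta(n)
        # cyclic shift: a_1..a_n b_1..b_n -> a_1..a_m b_1..b_m a_{m+1}..a_n b_{m+1}..b_n
        rotate_left(A, lo + m, lo + n + m - 1, n - m)
        interleave_3k(A, lo, m)    # cycle-leader pass on the 3^k block
        lo, n = lo + 2 * m, n - m  # "recurse" on the remaining suffix
    if n == 1:
        A[lo], A[lo + 1] = A[lo + 1], A[lo]

# e.g. interleave(['a1', 'a2', 'a3', 'b1', 'b2', 'b3'])
# turns the list into ['b1', 'a1', 'b2', 'a2', 'b3', 'a3']
```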
