
Suppose that each row of an $n \times n$ array $A$ consists of 1's and 0's, arranged so that in any row of $A$ all the 1's come before any 0's. Assuming $A$ is already in memory, describe a method running in O($n$) time for finding the row of $A$ that contains the most 1's.

I don't understand how this is possible if the array is not already sorted. How can you possibly do this in O($n$) time? What if every element in the array were a 1? Then any algorithm I can think of would run in O($n^2$).

  • Perhaps better off here: http://scicomp.stackexchange.com/ ? (2012-01-16)
  • @DanielFreedman: I looked at the site and it doesn't seem to have any questions about asymptotics. (2012-01-16)
  • If every element in the array was $1$, you can already terminate after looking at the first row, because there can't be more than $n$ ones. (2012-01-16)

1 Answer


Hint: Once you've seen a $1$ in column $k$, you never have to look at columns $\le k$ in any row anymore. You can move on to column $k+1$. In each step, if you see a $0$, you can exclude that row, and if you see a $1$ you can move on to the next column. Thus there can be at most $2n$ steps. I didn't fill in all the details because this is homework; feel free to ask if it's too vague.
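
To make the hint concrete, here is a minimal sketch (in Python) of the staircase walk it describes, filling in the details left out above; the function name row_with_most_ones and the representation of $A$ as a list of lists are illustrative assumptions, not part of the problem statement.

    def row_with_most_ones(A):
        """Staircase walk over an n x n 0/1 array whose rows list all 1's first.

        Assumes A is a non-empty list of n lists, each of length n.
        Returns the index of a row containing the most 1's.
        """
        n = len(A)
        best_row = 0        # if every entry is 0, the first row is a valid answer
        row, col = 0, 0     # start in the first column
        while row < n and col < n:
            if A[row][col] == 1:
                best_row = row   # this row has at least col + 1 ones
                col += 1         # a better row must have a 1 further to the right
            else:
                row += 1         # this row has at most col ones; exclude it
        return best_row

    # Example: the rows contain 1, 0 and 2 ones, so the last row (index 2) wins.
    print(row_with_most_ones([[1, 0, 0], [0, 0, 0], [1, 1, 0]]))  # prints 2

Each iteration increments either row or col, and both are bounded by $n$, so the loop runs at most $2n$ times, which gives the O($n$) bound.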

  • Oh wow, that is so simple. But I don't see where you got $2n$ from. Also, what would happen if every element is a $0$? (2012-01-16)
  • Very nice and tricky. (2012-01-16)
  • @styfle: The $2n$ comes from the fact that there are $n$ rows and $n$ columns and each step excludes either a row or a column, so after at most $n+n=2n$ steps there's nothing left to exclude. If every element is a $0$, you just look at all the $0$s in the first column and then pick any of the rows. (That was part of the detail I'd left out: you start in the first column.) (2012-01-16)
  • To me it seems like you could do it in $n$ steps instead of $2n$ steps: the outer loop goes from 1 to $n$ (one iteration per column), and inside, if A[row][col] = 1, mark that row as the answer; else, row = row + 1. The next iteration of the loop then checks the next column in the row with the most ones so far (the current answer). Does that make sense? (2012-01-17)
  • @styfle: I don't think so. If you increment the column in every step, even if the step increments the row, then you can miss a better row. For instance, what would your algorithm do if the only $1$s are one in the first row and two in the last row? I think it would miss the last row and give the first row as the answer. (A concrete matrix of this kind is worked through below, after this thread.) (2012-01-17)
  • It would go through every element in the first row since all of them are 1's. Then it would be done and return the first row as the answer. If there are ties, it prefers the row closest to the top (just like with all zeros). (2012-01-17)
  • @styfle: That's a misunderstanding. By "if the only $1$s are one in the first row and two in the last row", I meant that there is exactly one $1$ in the first row and there are exactly two $1$s in the last row. (2012-01-17)
  • Oh, I see what you're saying. It has to stay on column $k+1$ to check rows that are further down. You're right. (2012-01-17)
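
For concreteness, one $3 \times 3$ matrix of the kind described in the comments above (exactly one $1$ in the first row and exactly two $1$s in the last row; the size is just for illustration) is
$$A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 1 & 0 \end{pmatrix}.$$
The $n$-step variant sees the $1$ at position $(1,1)$ and records row $1$, sees the $0$ at $(1,2)$ and drops to row $2$, sees the $0$ at $(2,3)$ and drops to row $3$, and then stops because all $n$ columns have been used, reporting row $1$ even though row $3$ has two $1$s. The $2n$-step walk from the answer stays in column $2$ after dropping a row, so it reaches the $1$ at $(3,2)$ and correctly reports row $3$.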