
Having asked this question on SO, I was advised to post it here. I will be using JavaScript for the implementation.

Please consider a row of size 12. On that row, I want to place some items that come in 3 different widths: 1, 2 and 3.

I would like to get all the possible combinations of items on that row such that the sum of their widths equals the row size.

For example, I could have :

[1,1,1,1,1,1,1,1,1,1,1,1] (12 items of size 1)

[2,2,2,2,2,2] (6 items of size 2)

[3,3,3,3] (4 items of size 3)

[3,3,3,1,1,1] (3 items of size 3 + 3 items of size 1)

[2,2,1,1,3,3] (2 items of size 2 + 2 items of size 1 + 2 items of size 3)

  • @GerryMyerson I am trying to tile objects of different widths on rows on a screen. Similar to: http://images.google.com/search?q=tree&biw=1440&bih=828&sei=oQvjT5OAGIm40QHIuZCjAw&tbm=isch except that I set 3 ratios for the images. (2012-06-21)

1 Answer


The following Haskell code generates all possible solutions where the row is of width $n$. It works on a simple principle: for the base cases ($n=0$, $n=1$ and $n=2$) we write out the answer explicitly; otherwise (if $n>2$) we can put a block of width 1, 2 or 3 in the first position, and then run the algorithm recursively on the remaining space.

Translate to JavaScript at your leisure.

    getCombinations :: Int -> [[Int]]
    getCombinations 0 = [[]]
    getCombinations 1 = [[1]]
    getCombinations 2 = [[1,1], [2]]
    getCombinations n = map (1:) (getCombinations (n-1)) ++
                        map (2:) (getCombinations (n-2)) ++
                        map (3:) (getCombinations (n-3))
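As a quick sanity check (run in GHCi; output follows directly from the code above), note that the recursion produces ordered arrangements, so for example [1,2] and [2,1] are listed separately:

    ghci> getCombinations 3
    [[1,1,1],[1,2],[2,1],[3]]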

Note that this is inefficient for large $n$: the running time grows exponentially, because the recursion recomputes the same subproblems many times. For example, computing the answer for $n = 12$ computes the answers for $n = 9$, $10$ and $11$ in the process. Computing the answer for $n = 11$ also computes $n = 8$, $9$ and $10$, so we have duplication.

You could improve the efficiency by writing a memoizing algorithm, which stores the answer for each $n$ in a map when it is first computed, and looks up subsequent calls with the same $n$ in the map rather than recomputing them.
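For reference, here is one minimal memoized sketch in Haskell (the name getCombinationsMemo and the lazy-list table are illustrative, not part of the original answer); laziness ensures each sub-result is computed once and then shared:

    -- Memoized variant (illustrative): 'table' is an infinite lazy list
    -- whose element at index m is the full answer for width m. Each entry
    -- is evaluated at most once and then shared by all later lookups.
    getCombinationsMemo :: Int -> [[Int]]
    getCombinationsMemo n = table !! n
      where
        table = map go [0..]
        go 0 = [[]]
        go 1 = [[1]]
        go 2 = [[1,1], [2]]
        go m = map (1:) (table !! (m-1)) ++
               map (2:) (table !! (m-2)) ++
               map (3:) (table !! (m-3))

Bear in mind that memoization only removes the duplicated recursion; the number of solutions itself satisfies $C(n) = C(n-1) + C(n-2) + C(n-3)$ and therefore still grows exponentially, so the output for large $n$ is unavoidably large.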

  • @ThomasAndrews Good point! (2012-06-21)