
(Link to referenced image)

I'm looking into the patterns used to position "padfeet" on a padfoot soil compactor. The point of the feet is increased ground pressure; the downside is that as you roll over an area you only cover a certain percentage of the ground, in this case 15% per pass. (I've already calculated the average percentage you will hit with each successive pass.)

The pattern typically used is the chevron (bottom right, spaces removed). The operator rolls forward and back over an area, then turns around 180 degrees and repeats, for up to 6-8 total passes. The reversing is done to purposely decrease the chance of overlapping the "V"s.

My question: I've considered changing this to either the gull wing (bottom left and top, with accurate spacing) or an even more scattered pattern, such as putting row 5 as the 3rd impact, making a double chevron. Looking at this, I visually perceive it to be a "more random" pattern, but is there a way to quantify or define a difference between the patterns when they are both "equal" in their coverage? Or is it just in my head?


Thinking a bit more about it: what I've calculated previously (with some assistance) is the % coverage from each pass: "In general, if the compactor affects a fraction $p$ of the area each time, after $n$ passes it will have covered $1-(1-p)^n$ of the area. So to cover 90% in 8 passes, you'll need just about 25% coverage on each pass, as $1-(1-0.25)^8 \approx 90\%$."

The purpose of the 180-degree turn is to better mimic this calculation, because you don't get a unique pattern hit with every pass, and if you overlap while just driving forward and back you're likely to overlap with all 9 rows. It seems worth introducing a change to the pattern going around the drum, so that the pattern does not repeat identically for all 15 chevrons around the circumference but adjusts slightly, so that even when an overlap takes place it does so to a lesser percentage.
That still doesn't tell me if/how the shape of "each" pattern matters, though.
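One way to make the overlap question concrete is a toy Monte Carlo model: treat the drum circumference as a 1-D strip of cells and model "driving forward and back" as passes whose phase is only slightly jittered rather than fully random. All the numbers below (cell count, jitter, the two patterns) are invented for illustration, not the real padfoot geometry:

```python
import random

def simulate(patterns, circumference=100, passes=8, jitter=2, trials=2000):
    """Toy 1-D drum model. Each pass stamps one pattern (a set of
    covered cell offsets) at a phase near 0, off by a small random
    jitter, mimicking correlated forward-and-back passes.
    Returns the mean fraction of cells hit at least once."""
    total = 0.0
    for _ in range(trials):
        hit = set()
        for i in range(passes):
            pattern = patterns[i % len(patterns)]
            shift = random.randint(-jitter, jitter)
            hit.update((c + shift) % circumference for c in pattern)
        total += len(hit) / circumference
    return total / trials

random.seed(1)
# Both choices cover 15 of 100 cells (15%) per pass.
identical = [set(range(0, 30, 2))]                # same spacing every pass
varied = [set(range(0, 30, 2)), set(range(1, 45, 3))]  # alternating spacings
print(simulate(identical))  # lower: repeated passes re-hit the same cells
print(simulate(varied))     # higher: alternating spacings overlap less
```

Under fully independent (uniformly random) phases the pattern shape would not matter in expectation; it is precisely the correlation between passes that makes a varied pattern pay off, which matches the intuition behind the 180-degree turn.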

2 Answers


The notion of being more or less random appears here and there, particularly in the design of random number generators.

Here is a nice PDF on that (a legally available chapter of an O'Reilly book): http://www.johndcook.com/Beautiful_Testing_ch10.pdf

The very basic idea is:

consider a random number generator returning a pseudorandom series of numbers from $(0,1)$. If this series consisted of truly random numbers $x_n$, then:

  • the distribution of $x_n$ would be uniform on $(0,1)$
  • the distribution of any subsequence $x_{k_n}$ would also be uniform (provided we predefine $k_n$; in particular, we don't make it $x_n$-dependent)
  • for any (continuous) function $f:(0,1)\to\mathbb{R}$ it would be easy to determine the distribution of $f(x_n)$

If the above conditions are well satisfied by a pseudorandom sequence, it may be considered random. In the PDF, more advanced approaches are mentioned (specific statistical tests).
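The first condition, for example, can be checked with a plain chi-square goodness-of-fit test. A stdlib-only sketch (the function name and bin count are mine):

```python
import random

def chi_square_uniform(xs, bins=10):
    """Chi-square goodness-of-fit statistic for uniformity on (0,1):
    bin the values, compare observed counts to the flat expectation."""
    counts = [0] * bins
    for x in xs:
        counts[min(int(x * bins), bins - 1)] += 1
    expected = len(xs) / bins
    return sum((c - expected) ** 2 / expected for c in counts)

random.seed(0)
xs = [random.random() for _ in range(10000)]
# With 10 bins there are 9 degrees of freedom; the 95% critical
# value is about 16.9, and a uniform sample usually falls below it.
print(chi_square_uniform(xs))
# A deliberately skewed sample (x^2 piles mass near 0) scores far higher:
print(chi_square_uniform([x * x for x in xs]))
```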


I don't think "random" is the term you're asking for. There are measures (in the informal sense) that represent the level of randomness, the most important being Kolmogorov complexity. However, this measures pure information content, and I guess what you would like is rather a way to assess adherence to some well-established patterns.
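Kolmogorov complexity itself is uncomputable, but compressed size is a standard practical proxy for it: the closer the compression ratio is to 1, the more "random" the data looks to the compressor. A small sketch using zlib (the function name is mine):

```python
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size / original size: ratios near 1.0 mean the data
    is nearly incompressible, i.e. high in information content."""
    return len(zlib.compress(data, 9)) / len(data)

random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(10000))
regular = b"ABAB" * 2500
print(compression_ratio(noisy))    # near 1.0
print(compression_ratio(regular))  # far below 1.0
```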

What I would suggest is that you define a set of patterns you would like to use and then train a classifier to recognize them. An example of such a process would be any OCR software: the harder a picture is to "read off", the more random the pattern is. A simple scheme might involve random sensor segments scattered over the area of interest (e.g., a segment has value 1 if it crosses any black pixel and 0 otherwise; there are many possibilities here, and you need to tweak them to your problem), with the resulting vector put into a classifier (random forests, SVMs, neural networks, whatever). Using this you can devise your own "randomness" measure (note: this is not true mathematical randomness) and take into account anything you wish (depending on what you want, it might be easy, hard, or even impossible).
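The sensor-segment idea can be sketched as a toy feature extractor (all names, the grid representation, and the segment-sampling scheme are mine; a real setup would feed these vectors to a trained classifier):

```python
import random

def segment_features(grid, n_segments=64, seed=0):
    """Random 'sensor segments': each feature is 1 if the straight
    segment between two random points crosses a filled cell, else 0.
    The resulting bit vector can be fed to any standard classifier."""
    rng = random.Random(seed)
    h, w = len(grid), len(grid[0])
    features = []
    for _ in range(n_segments):
        r0, c0 = rng.randrange(h), rng.randrange(w)
        r1, c1 = rng.randrange(h), rng.randrange(w)
        steps = max(abs(r1 - r0), abs(c1 - c0), 1)
        hit = 0
        for t in range(steps + 1):
            # walk the segment in integer steps, checking each cell
            r = r0 + (r1 - r0) * t // steps
            c = c0 + (c1 - c0) * t // steps
            if grid[r][c]:
                hit = 1
                break
        features.append(hit)
    return features

# A tiny 8x8 "pattern" with one diagonal of padfoot impacts:
grid = [[1 if r == c else 0 for c in range(8)] for r in range(8)]
print(segment_features(grid, n_segments=16))
```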

This is just a general idea, but I hope it'll help ;-)

  • @DeWeave Have you tried having two (or more?) patterns joined in such a way that only some of them can overlap at once? E.g., having $A_1$ and $A_2$ such that if $A_1$ overlaps with a slightly shifted second-pass $A_1$, then $A_2$ from the first pass cannot overlap with $A_2$ from the second pass. My first try would be patterns which have different base frequencies. (2012-12-20)