Peter G. Anderson - List Of My Publications
The complete list
Using the J Language for Neural Net Experiments
Training Wheels for Encoder Networks
Multidimensional Golden Means
Using Quasi-Random Numbers in Neural Networks
Linear Pixel Shuffling for Image Processing, an Introduction
The Polynomial Method Augmented by Supervised Training for Hand-Printed Character Recognition
Genetic Algorithm Selection of Features
for Handwritten Character Identification
Typewriter Keyboards via Simulated Annealing
Ordered Greed
Ordered Greed, II: Graph Coloring
Advances in Ordered Greed
Error Diffusion Using Linear Pixel Shuffling
Linear Pixel Shuffling (I): New Paradigms for New Printers
Advances in Linear Pixel Shuffling
Error Diffusion and Edge Enhancement: Raster Versus Omni-Directional Processing
The Unit RBF Network: Experiments and Preliminary Results
Lectures on Genetic Algorithms
Color Uniformity and Moire in Dispersed Dot Halftone Masks Generated by Linear Pixel Shuffling
J Language Lectures
A Genetic Algorithm Search for Improved Halftone Masks
Optimizing Halftone Masks with Genetic Algorithms and a Printer Model
Good Halftone Masks via Genetic Algorithms
A Genetic Algorithm Search for Improved Halftone Masks
Compressible Halftoning
Combinatorial Proofs of Fermat's, Lucas's, and Wilson's Theorems
Convolutions Combinatorially
Multidimensional Zeckendorf Representations
Tiling Rectangles, Cylinders, and Mobius Strips
Every Positive k-Bonacci-Like Sequence Eventually Agrees with a Row of the k-Zeckendorf Array
Patterns in Differences Between Rows in k-Zeckendorf Arrays
Peter G. Anderson,
Using the J Language for Neural Net Experiments,
December, 1995
We introduce the programming language J
and show its applicability for experimenting
with neural networks and genetic algorithms.
We illustrate the use of J with complete
programs for perceptron and back propagation
learning.
For a compressed PostScript version of this paper,
click here.
Peter G. Anderson,
Training Wheels for Encoder Networks,
December, 1995
We develop a new approach to training encoder feed-forward neural
networks and apply it to two classes of problems.
Our approach is to initially train the network with a related,
relatively easy-to-learn problem, and then gradually replace the
training set with harder problems, until the network learns the problem
we originally intended to solve.
The problems we address are modifications of the common N-2-N encoder
network problem with N exemplars, the unit vectors, e[k] in N-space.
Our first modification of the problem is to use objects consisting of
paired 1's, e[k]+e[k+1], with subscripts taken mod N. This requires an
N-2-N net to organize the images of the exemplars in 2-space ordered
around a circle.
Our second modification is to use patterns consisting of two objects;
each object is a pair of adjacent 1's; the objects must be separated
from each other. This problem can be learned by an N-4-N network which
must organize the images of the exemplars in 4-space in the form of a
Mobius strip.
The easy-to-learn problem in both cases involves replacing the two-ones
signal e[k]+e[k+1] with a block-signal of length B:
e[k]+e[k+1]+...+e[k+B-1].
In several cases, our method
allowed us to train networks that otherwise fail to train. In some
other cases, our method proved to be ten times faster than otherwise.
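The exemplar construction is easy to sketch. Below is a minimal Python/NumPy illustration of the block-signal training sets e[k]+e[k+1]+...+e[k+B-1] and of a schedule that shrinks B toward the target two-ones problem; the train_network call is a hypothetical placeholder, not code from the paper.

import numpy as np

def block_exemplars(N, B):
    # The N cyclic block-signal patterns e[k] + e[k+1] + ... + e[k+B-1],
    # subscripts taken mod N.
    X = np.zeros((N, N))
    for k in range(N):
        X[k, np.arange(k, k + B) % N] = 1.0
    return X

# "Training wheels": start with an easy, wide block and shrink it toward the
# target two-ones signal e[k] + e[k+1] (B = 2).
N = 16
for B in range(N // 2, 1, -1):
    X = block_exemplars(N, B)     # inputs and targets for an N-2-N encoder
    # train_network(X, X)         # hypothetical trainer, not from the paper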
For a compressed PostScript version of this paper,
click here.
Peter G. Anderson,
Multidimensional Golden Means,
March, 1993
We investigate a geometric construction which yields
periodic continued fractions and generalize it to
higher dimensions. The simplest of these constructions
yields a number which we call a two (or higher)
dimensional golden mean, since it appears as a limit
of ratios of a generalized Fibonacci sequence.
Multiples of these golden points, considered
"mod 1" (i.e., points on a torus) prove to be good
probes for applications such as Monte Carlo integration
and image processing.
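As a flavor of the idea, here is a minimal Python sketch of the one-dimensional case: multiples of the classical golden mean taken mod 1, used as probe points for a Monte Carlo style integration. The paper's multidimensional golden points are not reproduced here.

import math

phi = (1 + math.sqrt(5)) / 2                     # the classical golden mean

def golden_points(n):
    # Multiples of the golden mean mod 1: a very uniform probe of [0, 1).
    return [(k * phi) % 1.0 for k in range(1, n + 1)]

# Monte Carlo style estimate of the integral of x^2 over [0, 1).
pts = golden_points(10_000)
estimate = sum(x * x for x in pts) / len(pts)    # close to 1/3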
For a compressed PostScript version of this paper,
click here.
Peter G. Anderson,
Linear Pixel Shuffling for Image Processing, an Introduction,
June, 1993
We investigate a method of ordering pixels (the elements of
a rectangular matrix) based on an arithmetic progression
with wraparound (modular arithmetic). For appropriate
choices of the progression's parameters based on a
generalization of Fibonacci numbers and the golden mean,
we find equidistributed collections of pixels formed by
subintervals of the pixel progression or "shuffle." We
illustrate this equidistributivity with a novel approach
to progressive rendering of a synthetic image, and we
suggest several opportunities for its application to other
areas of image processing.
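One illustrative (and simplified) reading of the pixel shuffle is sketched below in Python: pixels are visited in order of a linear key with wraparound. The constants shown are stand-ins based on the ordinary golden mean; the paper derives its parameters from a generalization of the Fibonacci numbers.

import numpy as np

def lps_order(height, width, a, b):
    # Order the pixels of a height x width image by the linear key
    # (a*row + b*col) mod 1 -- an arithmetic progression with wraparound.
    rows, cols = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    key = (a * rows + b * cols) % 1.0
    coords = np.dstack((rows, cols)).reshape(-1, 2)
    return coords[np.argsort(key.ravel())]

# Illustrative parameters only (reciprocal powers of the golden mean).
phi = (1 + 5 ** 0.5) / 2
order = lps_order(64, 64, 1 / phi, 1 / phi ** 2)   # early entries spread widely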
For a compressed PostScript version of this paper,
click here.
For a PDF version of this paper,
click here.
Peter G. Anderson,
Using Quasi-Random Numbers in Neural Networks
March, 1995
We present a novel training algorithm for a feed
forward neural network with a single hidden layer
of nodes (i.e., two layers of connection
weights).
Our algorithm is capable of training networks for hard
problems, such as the classic two-spirals problem.
The weights in the first layer are determined
using a quasirandom number generator. These
weights are frozen---they are never
modified during the training process.
The second layer of weights is trained as a
simple linear discriminator using methods such as
the pseudo-inverse, with possible iterations.
We also study the problem of reducing the hidden layer:
pruning low-weight nodes and a genetic algorithm search
for good subsets.
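The two-layer scheme can be sketched briefly. The Python/NumPy code below freezes a random first layer and solves the second layer with the pseudo-inverse; it uses ordinary pseudo-random weights where the paper uses a quasi-random generator, and the function names are illustrative.

import numpy as np

def train_two_layer(X, T, hidden=200, seed=0):
    # First-layer weights are frozen -- never modified during training.
    # The paper draws them from a quasi-random generator; plain pseudo-random
    # weights are used here for brevity.
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((X.shape[1] + 1, hidden))
    Xb = np.hstack([X, np.ones((len(X), 1))])        # append a bias column
    H = np.tanh(Xb @ W1)                              # hidden activations
    W2 = np.linalg.pinv(H) @ T                        # linear discriminator
    return W1, W2

def predict(X, W1, W2):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.tanh(Xb @ W1) @ W2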
For a PostScript version of this paper,
click here.
For a pdf version of this paper,
click here.
Peter G. Anderson and Roger S. Gaborski, Kodak,
The Polynomial Method Augmented by Supervised Training for Hand-Printed Character Recognition
Presented at The International
Conference on Artificial Neural Networks &
Genetic Algorithms, ANNGA 93,
Innsbruck, Austria,
April 1993.
We present a pattern recognition
algorithm for hand-printed characters, based on a
combination of the classical least squares method
and a neural-network-type supervised training
algorithm. Characters are mapped, nonlinearly,
to feature vectors using selected quadratic
polynomials of the given pixels. We use a method
for extracting an equidistributed subsample of
all possible quadratic features.
This method creates pattern classifiers with
accuracy competitive with that of feed-forward systems
trained using back propagation; however, back
propagation training takes longer by a factor of
ten to fifty. (This makes our system
particularly attractive for experimentation with
other forms of feature representation, other
character sets, etc.)
The resulting classifier runs much faster in use
than the back propagation trained systems,
because all arithmetic is done using bit and
integer operations.
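A rough Python/NumPy sketch of the pipeline appears below: quadratic features formed from products of pixel pairs, followed by a classical least-squares fit. The pairs here are chosen at random, whereas the paper extracts an equidistributed subsample; all names are illustrative.

import numpy as np

def train_polynomial_classifier(images, labels, n_features=1500, n_classes=10, seed=0):
    # Quadratic features: products of selected pixel pairs.  The paper uses an
    # equidistributed subsample of pairs; random pairs are used here.
    rng = np.random.default_rng(seed)
    pairs = rng.integers(0, images.shape[1], size=(n_features, 2))
    F = images[:, pairs[:, 0]] * images[:, pairs[:, 1]]
    T = np.eye(n_classes)[labels]                     # one-hot targets
    W = np.linalg.lstsq(F, T, rcond=None)[0]          # classical least squares
    return pairs, W

def classify(images, pairs, W):
    F = images[:, pairs[:, 0]] * images[:, pairs[:, 1]]
    return np.argmax(F @ W, axis=1)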
For a PDF version of this paper, click here.
Peter G. Anderson and Christopher Asbury, RIT,
Roger S. Gaborski and David G. Tilley, Kodak
Genetic Algorithm Selection of Features
for Handwritten Character Identification,
Presented at The International
Conference on Artificial Neural Networks &
Genetic Algorithms, ANNGA 93,
Innsbruck, Austria, April 1993.
We have constructed a linear discriminator for
hand-printed character recognition that uses a
(binary) vector of 1,500 features based on an
equidistributed collection of products of pixel
pairs. This classifier is competitive with other
techniques, but faster to train and to run for
classification.
However, the 1,500-member feature set clearly
contains many redundant (overlapping or useless)
members, and a significantly smaller set would be
very desirable (e.g., for faster training, a
faster and smaller application program, and a
smaller system suitable for hardware
implementation). A system using the small set of
features should also be better at generalization,
since fewer features are less likely to allow a
system to memorize noise in the training data.
Several approaches to using a genetic algorithm
to search for effective small subsets of features
have been tried, and we have successfully derived
a 300-element set of features and built a
classifier whose performance is as good on our
training and testing set as the system using the
full set.
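The following toy Python skeleton shows the shape of such a search: a GA over fixed-size feature subsets whose fitness function (a classifier trained and scored on the chosen subset) is left to the caller. It is a sketch of the general approach, not the experiments reported here.

import random

def ga_feature_subset(fitness, n_features=1500, subset_size=300,
                      pop_size=40, generations=100):
    # Toy GA over fixed-size feature subsets.  `fitness` trains and scores a
    # classifier on a subset (supplied by the caller).
    def random_subset():
        return random.sample(range(n_features), subset_size)

    def crossover(a, b):
        return random.sample(list(set(a) | set(b)), subset_size)

    def mutate(s):
        s = s[:]
        new = random.randrange(n_features)
        if new not in s:
            s[random.randrange(subset_size)] = new
        return s

    pop = [random_subset() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)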
For a PostScript version of this paper,
click here.
For a pdf version of this paper,
click here.
Lissa W. Light and Peter G. Anderson
Typewriter Keyboards via Simulated Annealing,
Printed in AI Expert, September, 1993
We apply the simulated annealing algorithm
to the combinatorial optimization problem
of typewriter keyboard design, yielding
nearly optimal key-placements using a
figure of merit based on English letter
pair frequencies and finger travel-times.
Our keyboards are demonstrably superior to
both the ubiquitous QWERTY keyboard and the
less common Dvorak keyboard.
The paper is constructed as follows: first
we discuss the historical background of
keyboard design; this includes August
Dvorak's work, and a figure-of-merit
(scalar) metric for keyboards. We discuss
a theory of keyboard designs: why keyboard
design is a combinatorial problem, how
combinatorial problems are typically
solved, what is simulated annealing, and
why it is especially suitable for the
problem at hand. Next we discuss the
results and compare the keyboards produced
by simulated annealing to QWERTY and
Dvorak's keyboard. Finally, we suggest
some future lines of inquiry.
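For orientation, a bare-bones Python skeleton of simulated annealing over key placements is given below; the cost function, which would combine letter-pair frequencies and finger travel times, is supplied by the caller and is not reproduced from the paper.

import math, random

def anneal_keyboard(keys, cost, t0=10.0, cooling=0.9999, steps=200_000):
    # `keys` is the list of letters in slot order; `cost` scores a layout,
    # e.g. sum over letter pairs of (pair frequency) x (finger travel time).
    current = keys[:]
    current_cost = cost(current)
    best, best_cost = current[:], current_cost
    t = t0
    for _ in range(steps):
        i, j = random.sample(range(len(current)), 2)
        current[i], current[j] = current[j], current[i]        # propose a swap
        new_cost = cost(current)
        if new_cost < current_cost or random.random() < math.exp((current_cost - new_cost) / t):
            current_cost = new_cost                             # accept
            if new_cost < best_cost:
                best, best_cost = current[:], new_cost
        else:
            current[i], current[j] = current[j], current[i]     # reject: undo
        t *= cooling
    return best, best_cost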
For a compressed PostScript version of this paper,
click here.
Peter G. Anderson and William T. Gustafson
Ordered Greed,
Scheduling problems are among the most challenging
and realistic problems for the application of problem-solving
heuristics, such as genetic algorithms (GAs). The
naive greedy algorithm for scheduling simply assigns,
in turn, each item to be scheduled the best yet
untaken position for that item. We investigate using
a genetic algorithm to search the space of orderings
for this greedy algorithm. That is, the GA
individuals are permutations that determine the
permutations that are the schedules, rather than the
GA individuals directly being the schedules.
We have experimented with the classical N Queens
problem and a realistic soccer tournament scheduling
problem, comparing a GA whose individuals are the assignments
directly with our greedy hybrid algorithm (ordered greed).
Warnsdorff's heuristic is introduced to modify
blind greed with excellent results.
We also introduce the use of signatures in our GAs
to represent permutations. Signatures are easy to
create and manipulate in crossover and mutation
operations.
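The decoder half of ordered greed is simple to sketch. The Python fragment below places N queens in the column order given by a permutation, each queen taking the least-conflicted unused row; the GA over permutations (and the signature representation) is omitted, and the details are illustrative rather than the paper's.

def greedy_queens(order):
    # Ordered-greed decoder for N-Queens: `order` is a permutation of the
    # columns; each column's queen takes the best still-unused row, judged by
    # diagonal conflicts with the queens placed so far.
    n = len(order)
    row_of = {}                      # column -> chosen row
    free_rows = set(range(n))
    conflicts_total = 0
    for col in order:
        def conflicts(row):
            return sum(abs(row - r) == abs(col - c) for c, r in row_of.items())
        row = min(free_rows, key=conflicts)
        conflicts_total += conflicts(row)
        row_of[col] = row
        free_rows.remove(row)
    return row_of, conflicts_total   # fitness: fewer conflicts is better

# A GA then searches the space of column orderings (permutations), using
# conflicts_total as the quantity to minimize.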
For a PDF version of this paper,
click here.
For a Postscript version of this paper,
click here.
Peter G. Anderson
Ordered Greed, II: Graph Coloring
Presented at the ICSC/NAISO Conference, Information Science
Innovations (ISI 2001),
Dubai, UAE, March, 2001.
A popular application of genetic algorithms (GAs) is to
attempt to generate good, rapid, approximate solutions to
NP-complete or NP-hard problems.
Previously, we introduced a hybrid algorithm combining
a GA with simple greedy algorithms applied to the N-Queens
problem and to sports tournament scheduling. The greedy
algorithm makes locally optimal assignments (to Queens or
matches) in some order. We treat that ordering as the
sought after goal, and thus work with a population whose
individuals are permutations.
The subject of the present paper is the problem of
graph coloring. We focus the present paper on the single
benchmark problem of coloring a three-colorable graph that
was constructed as a subgraph of the complete 3-partite
graph K{p,q,r} in which each edge exists with probability
0.1. (We have applied our method successfully to several
other categories of graphs, but present space limitations
dictate presenting the results for this special case.)
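The greedy decoder used by ordered greed for coloring can be sketched in a few lines of Python: color vertices in the order given by a permutation, each taking the smallest color unused by its already-colored neighbors. Names and details are illustrative.

def greedy_coloring(order, adj):
    # `order` is a permutation of the vertices; `adj` maps each vertex to the
    # set of its neighbors.
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color, 1 + max(color.values())   # coloring and number of colors used

# The GA's fitness for a permutation is the number of colors its greedy
# coloring uses (to be minimized); three colors suffice for the benchmark graph.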
For a PDF version of this paper,
click here.
For a Postscript version of this paper,
click here.
Peter G. Anderson
Error Diffusion Using Linear Pixel Shuffling
Presented at the IS&T Conference,
PICS2000, Portland, OR, March, 2000.
Linear pixel shuffling error diffusion is a digital
halftoning algorithm that combines the linear pixel
shuffling (LPS) order of visiting pixels in an image
with diffusion of quantization errors in all
directions.
LPS uses a simple linear rule to produce a pixel
ordering giving a smooth, uniform probing of the
image. This paper elucidates that algorithm.
Like the Floyd-Steinberg algorithm, LPS error
diffusion enhances edges and retains high-frequency
image information. LPS error diffusion avoids some of
the artifacts (worms, tears, and
checkerboarding) often associated with the
Floyd-Steinberg algorithm. LPS error diffusion
requires the entire image be available in memory; the
Floyd-Steinberg algorithm requires storage
proportional only to a single scan line.
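A minimal Python sketch of omni-directional error diffusion driven by an arbitrary pixel ordering appears below. It pushes each pixel's quantization error equally to its not-yet-visited neighbors, which is only a stand-in for the paper's diffusion weights, and it shows why the whole image must be held in memory.

import numpy as np

def error_diffuse(image, order, radius=1):
    # Threshold pixels in the given order (e.g., an LPS order), pushing each
    # pixel's quantization error equally to its not-yet-visited neighbors
    # within `radius`; the equal-weight kernel is an illustrative choice.
    img = image.astype(float).copy()
    out = np.zeros_like(img)
    visited = np.zeros(img.shape, dtype=bool)
    h, w = img.shape
    for r, c in order:
        out[r, c] = 1.0 if img[r, c] >= 0.5 else 0.0
        err = img[r, c] - out[r, c]
        visited[r, c] = True
        nbrs = [(r + dr, c + dc) for dr in range(-radius, radius + 1)
                for dc in range(-radius, radius + 1)
                if 0 <= r + dr < h and 0 <= c + dc < w
                and not visited[r + dr, c + dc]]
        for rr, cc in nbrs:
            img[rr, cc] += err / len(nbrs)
    return out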
For a Postscript version of this paper,
click here.
For an html version of this paper,
click here.
Peter G. Anderson, Jonathan S. Arney, and Kevin Ayer
Linear Pixel Shuffling (I):
New Paradigms for New Printers
Presented at the IS&T Conference
on Non-Impact Printing, Vancouver, 2001
Linear Pixel Shuffling (LPS) is a novel order for
image pixel processing which provides opportunities
for construction of dot placement algorithms for
high-resolution printers through micro-clumping and
the formation of pseudo clustered dots.
We present the details of LPS, how to program using
it, several of its properties and applications,
especially for electrophotographic (EP) imaging.
For a Postscript version of this paper,
click here.
For a PDF version of this paper,
click here.
Peter G. Anderson
Advances in Linear Pixel Shuffling
Presented at
the Fibonacci Association conference,
Summer 1992,
St. Andrews, Scotland
Given an interval or a higher dimensional
block of points, which may be either continuous
or discrete, how can we probe
that set in a smooth manner, visiting all its
regions without slighting some and overprobing
others? The method should be easy to program, to
understand, and to run efficiently.
We investigate a method of
visiting the pixels (the elements of a
rectangular matrix) and the points in the real
unit cube based on an arithmetic progression with
wrap-around (modular arithmetic). For
appropriate choices of parameters, choices that
generalize Fibonacci numbers and the golden mean,
we find equidistributed collections of pixels or
points, respectively.
We illustrate this equidistributivity with a
novel approach to progressive rendering of
digital images. We also suggest several
opportunities for its application to other areas
of image processing and computing.
For a Postscript version of this paper,
click here.
For a PDF version of this paper,
click here.
Peter G. Anderson
Linear Pixel Shuffling for Image Processing, an Introduction
This article appears in The Journal of Electronic Imaging,
April, 1993, pp. 147-154.
We present a method of ordering pixels (the
elements of a rectangular matrix) based on an
arithmetic progression with wrap-around (modular
arithmetic). For appropriate choices of the
progression's parameters based on a generalization of
Fibonacci numbers and the golden mean, we achieve
uniformly distributed collections of pixels formed by
intervals of the pixel progression or shuffle.
We illustrate this uniformity with a novel
approach to progressive rendering of a synthetic
image, and we note several opportunities for
applications to other areas of image processing.
For a Postscript version of this paper,
click here.
For a PDF version of this paper,
click here.
Jonathan S. Arney, Peter G. Anderson, and Sunadi Ganawan
Error Diffusion and Edge Enhancement:
Raster Versus Omni-Directional Processing
Paper presented at the Western NY Conference
on Image Processing, Rochester, NY, September 9, 2001.
Error diffusion is a well known technique for
generating bi-level images and is often used instead
of a halftone screen process in order to minimize the
visual impact of quantization error. In addition, an
alternative to raster image processing, involving a
linear algorithm called linear pixel shuffling, was
used to perform error diffusion. This allows the use
of symmetrical kernels and the omni-directional
diffusion of quantization error. Edge enhancement
with omni-directional error diffusion was examined and
found to be capable of more directionally uniform
enhancement of edges than is possible with raster
error diffusion.
For a Postscript version of this paper, click here.
Peter G. Anderson
The Unit RBF Network: Experiments and Preliminary Results
URBF, the
unit radial basis function
network, is an RBF neural network
with all second-layer weights set to
+/- 1. The URBF models functions or
physical phenomena by sampling their
behaviors at various probe points,
and correcting the model, more and
more delicately
(i.e., using Gaussian functions with ever narrower
spread), when discrepancies are
discovered. The probe points---input
space positions to test and adjust
the network---are linear pixel
shuffling points, used for their
highly uniform sampling property.
We demonstrate the network's
performance on several examples. It
shows its power via good extrapolation
behavior: for smooth-boundary
discriminations, very few new hidden units
need to be added for a large number of
probe points.
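The growth rule can be sketched compactly. The Python code below adds a Gaussian unit with output weight +1 or -1 wherever the current model disagrees with the target at a probe point, narrowing the spread as probing proceeds; it uses uniform random probes where the paper uses linear pixel shuffling points, and the width schedule is an illustrative assumption.

import numpy as np

def fit_unit_rbf(target, dim=2, n_probes=500, seed=0):
    # Grow a unit-RBF model: where the model disagrees with the target at a
    # probe point, add a Gaussian there with output weight +1 or -1 and an
    # ever-narrower spread (the width schedule shown is only illustrative).
    rng = np.random.default_rng(seed)
    centers, signs, widths = [], [], []

    def model(x):
        return sum(s * np.exp(-np.sum((x - c) ** 2) / (2.0 * w * w))
                   for c, s, w in zip(centers, signs, widths))

    for k in range(1, n_probes + 1):
        x = rng.random(dim)             # the paper probes with LPS points
        want = target(x)                # +1 or -1
        if np.sign(model(x)) != want:   # discrepancy discovered: add a unit
            centers.append(x)
            signs.append(want)
            widths.append(1.0 / np.sqrt(k))
    return model

# Example: a smooth-boundary discrimination (inside/outside a disk).
inside_disk = lambda x: 1.0 if np.sum((x - 0.5) ** 2) < 0.09 else -1.0
model = fit_unit_rbf(inside_disk)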
For a pdf version of this paper,
click here.
Peter G. Anderson
Lectures on Genetic Algorithms
An introduction to problem solving by emulating nature's method:
evolution,
selective breeding, ``survival of the fittest.''
We present the fundamental algorithms
and several examples---especially some problems that are
hard to solve otherwise.
A directory containing several pdf slides.
Peter G. Anderson
Color Uniformity and Moire in Dispersed Dot Halftone Masks Generated by Linear Pixel Shuffling
We investigate a method for combining two
dispersed-dot halftoning masks for two colors
that generalizes the traditional angular
displacement used with clustered-dot halftone
screens. Our masks were two variations of
the Linear Pixel Shuffling algebraic mask
algorithm involving numbers arising from two
different
third-order, Fibonacci-like recurrences. As an
alternative to screen rotation, these masks were
repositioned by flipping and rotating, and all
pairwise positional combinations of the
transposed, inverted masks were used two at a
time to generate cyan and magenta images, each at
nominal dot fractions of 0.25. The resulting
blue images were evaluated for color uniformity
and moire using visual evaluations, Fourier
analysis, and a color micro-variation analysis.
This presentation will describe the analytical
techniques used to evaluate color uniformity and
moire.
For a Postscript version of this paper,
click here.
For a PDF version of the visuals
click here.
Peter G. Anderson, Jonathan S. Arney, Samuel A. Inverso,
Daniel R. Kunkle, Timothy M. Lebo, and Chadd Merrigan
A Genetic Algorithm Search for Improved Halftone Masks
We present a genetic algorithm that automatically
generates halftone masks optimized for use in specific
printing systems. The search is guided by a single
figure of merit based on a detailed model of the
printing process and the human visual system. Our
experiments show that genetic algorithms are effective
in finding improved halftone masks and that two
methods of reducing the search space to particular
subsets of possible halftone masks greatly enhance the
performance of the GA.
For a pdf version of this paper,
click here.
Peter G. Anderson, Jonathan S. Arney, Samuel A. Inverso,
Daniel R. Kunkle, Timothy M. Lebo, and Chadd Merrigan
Good Halftone Masks via Genetic Algorithms
Presented at the 2003 Western New York Image Processing
Workshop, Rochester, NY, October 17, 2003.
We present a genetic algorithm that automatically
generates halftone masks optimized for use in specific
printing systems. The search is guided by a single
figure of merit based on a model of the
printing process and the human visual system.
Our
experiments show that genetic algorithms are effective
in finding improved halftone masks and that two
methods of reducing the search space to particular
subsets of possible halftone masks greatly enhance
the search performance.
For a pdf version of this paper,
click here.
Peter G. Anderson, Jonathan S. Arney, Samuel A. Inverso,
Daniel R. Kunkle, Timothy M. Lebo, and Chadd Merrigan
A Genetic Algorithm Search for Improved Halftone Masks
Presented at the ANNIE Conference, November 2003, St. Louis, MO.
We present a genetic algorithm that automatically
generates halftone masks optimized for use in specific
printing systems. The search is guided by a single
figure of merit based on a detailed model of the
printing process and the human visual system. Our
experiments show that genetic algorithms are effective
in finding improved halftone masks and that two
methods of reducing the search space to particular
subsets of possible halftone masks greatly enhance
the performance of the GA.
For a pdf version of this paper,
click here.
For the pdf slides for this presentation
click here.
Peter G. Anderson and Jonathan S. Arney
Optimizing Halftone Masks with Genetic Algorithms and a Printer Model
Techniques are described for determining optimum
halftone masks for electrophotographic laser printers.
Earlier techniques were based primarily on up-shifting
the noise power of the halftone pattern in order to
minimize visual granularity. This strategy worked
well with low addressable printers of the 1980s and
early '90s. However, current addressability of EP
printers is so high that too much up-shifting
introduces printer instabilities that can produce
different kinds of granularity and/or mottle. Thus,
the optimum halftone mask for image quality should
have noise power concentrated within a frequency band
between the edge of human vision and the edge of
printer stability. The authors have developed search
techniques for examining permutations of halftone
masks of size HxW. The techniques are based on a
strategy called a genetic algorithm (GA).
Peter G. Anderson
J Language Lectures
This is a collection of overheads introducing the J language
with many examples and explanations.
For a pdf version of these visuals, click here.
Peter G. Anderson and Changmeng Liu
Compressible Halftoning
We present a technique for producing a bi-level,
halftoned image from a gray-scale image in such a way
that the halftoned image is compressible.
The technique involves sorting the halftoned image
using the halftone mask as a sort key, then
using a Huffman code to represent groups of
the rearranged pixels.
We get compression ratios of 3.5--10 for
halftoned photographs.
This talk was presented at Electronic Imaging,
Santa Clara, January 2003. However, material on multilevel
halftone compression was added later.
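The rearrangement step is easy to sketch. The Python code below sorts the halftoned pixels using the halftone mask as the sort key, packs the rearranged bits into fixed-size groups, and Huffman-codes the groups to estimate the compressed size; the group size and coding details are illustrative assumptions, not the paper's exact scheme.

import heapq
from collections import Counter

import numpy as np

def compressed_bit_count(halftone, mask, group=8):
    # Sort the halftoned pixels using the halftone mask as the sort key, pack
    # the rearranged bits into `group`-bit symbols, and Huffman-code them.
    order = np.argsort(mask.ravel(), kind="stable")
    bits = halftone.ravel()[order].astype(int)
    bits = bits[: len(bits) - len(bits) % group]
    symbols = bits.reshape(-1, group) @ (1 << np.arange(group))
    freq = Counter(symbols.tolist())

    # Standard Huffman construction, tracking only the code length per symbol.
    heap = [[f, [sym, 0]] for sym, f in freq.items()]
    heapq.heapify(heap)
    if len(heap) == 1:
        heap[0][1][1] = 1
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:] + hi[1:]:
            pair[1] += 1
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    lengths = {sym: length for sym, length in heap[0][1:]}
    return sum(freq[s] * lengths[s] for s in freq)    # bits after Huffman coding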
For a pdf version of this paper,
click here.
Peter G. Anderson, Arthur T. Benjamin, and Jeremy A. Rouse
Combinatorial Proofs of Fermat's, Lucas's, and Wilson's Theorems
In this note, we observe that many classical theorems
from number theory are simple consequences of a
simple combinatorial lemma.
To appear in the American Mathematical Monthly
For a pdf version of this paper,
click here.
Peter G. Anderson and Marjorie Bicknell-Johnson
Multidimensional Zeckendorf Representations
We generalize Zeckendorf's Theorem to represent points in
Z^{k-1}, uniquely, as sums
of elements of order-k linear recurrences.
Presented at the Conference of the Fibonacci Association,
July 2010, Morelia, Mexico.
Accepted for publication in the Fibonacci Quarterly.
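For orientation only, here is the classical one-dimensional case in Python: the greedy algorithm that produces the ordinary Zeckendorf representation. The paper's multidimensional generalization to Z^{k-1} is not reproduced here.

def zeckendorf(n):
    # Classical Zeckendorf representation: write n > 0 as a sum of
    # non-consecutive Fibonacci numbers by greedily taking the largest that fits.
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):
        if f <= n:
            parts.append(f)
            n -= f
    return parts

print(zeckendorf(100))   # [89, 8, 3]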
For a pdf version of this paper,
click here.
Peter G. Anderson
Tiling Rectangles, Cylinders, and Mobius Strips
We present a method for determining the number of ways
various two-dimensional grids can be tiled using several
small tile repertoires.
Presented at the Conference of the Fibonacci Association,
July 2010, Morelia, Mexico.
For a pdf version of this paper,
click here.
Peter G. Anderson
Convolutions Combinatorially
Inspired by Benjamin and Quinn's combinatorial approach
to the Fibonacci numbers, etc.
(Proofs that Really Count,
The Art of Combinatorial Proof),
we observe that the
Fibonacci convolution sequence has a conceptually simple
combinatorial interpretation which leads immediately to
a fourth-order recurrence and other properties.
We also find a combinatorial, nearly proof-without-words
argument for the convolution formula for the difference between
the Pell and Fibonacci sequences, as well as differences
for other linear recurrences.
This uses the observation that
the number of ways to tile an n-board with squares and
dominoes is a Fibonacci number; in case the squares
come in two colors, it's a Pell number.
Finally, a consideration of the placement of the second
color in tiling an n-board gives the Pell numbers
as another convolution of the Fibonacci numbers.
Presentation at the Midwest Conference
on Combinatorics, Cryptography, and Computing,
Rochester, New York, October 2004
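Two of the facts mentioned above are easy to check numerically. The Python snippet below verifies that the Fibonacci self-convolution satisfies a fourth-order recurrence (the one with characteristic polynomial (x^2 - x - 1)^2), and that counting tilings of an n-board by dominoes and two-colored squares gives the Pell numbers.

# Fibonacci numbers F_0 = 0, F_1 = 1, ...
N = 20
F = [0, 1]
while len(F) <= N:
    F.append(F[-1] + F[-2])

# Fibonacci self-convolution c_n = sum_i F_i * F_(n-i).
c = [sum(F[i] * F[n - i] for i in range(n + 1)) for n in range(N + 1)]

# Fourth-order recurrence with characteristic polynomial (x^2 - x - 1)^2.
assert all(c[n] == 2 * c[n - 1] + c[n - 2] - 2 * c[n - 3] - c[n - 4]
           for n in range(4, N + 1))

# Tilings of an n-board by dominoes and squares in two colors: Pell numbers.
t = [1, 2]                      # empty board; a single cell takes either color
for n in range(2, N + 1):
    t.append(2 * t[-1] + t[-2]) # end with a colored square (2 ways) or a domino
print(t[:7])                    # 1, 2, 5, 12, 29, 70, 169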
For the PDF slides,
click here.
Peter G. Anderson and Daniel Ashlock (Iowa State University)
Advances in Ordered Greed
Ordered Greed is a form of genetic algorithm that uses a population of
permutations whose fitnesses depend on their use as the orders in
which parts of a problem are solved. For example, a permutation may
specify the order of the rows in which chess-board queens are placed
to try to avoid attacks by other queens, or it may specify the order
in which vertices of a graph are colored to avoid adjacent vertices
getting the same color. The problems mentioned are surrogates for
practical, difficult, real-life problems such as scheduling.
Ordered greed requires its own form of crossover. We have developed
and investigate two new crossover operators. The first uses the
standard combinatorial mathematics notion of permutations' signatures,
which are a list of numbers that specify a permutation, but for which
the usual bit-string notions of substring-swapping crossovers apply.
The second form of crossover works with permutations directly, merging
two parents, as in a riffle shuffle, and extracting two children from
the merged list consisting of the lists of first or second instances
of each value. We compare
the new methods with each other and with the well-known PMX crossover.
The application for ordered greed that we have chosen
for this investigation is that of coloring Hamiltonian
planar
graphs (i.e., map graphs), which are well known to be four-colorable.
An abbreviated version of this paper was
accepted by the ANNIE Conference, St. Louis, Nov. 2004.
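The riffle-merge crossover described above can be sketched directly in Python: merge the two parent permutations as in a riffle shuffle, then read off one child from the first occurrences of each value and the other from the second occurrences. The interleaving rule used here is an illustrative choice.

import random

def riffle_crossover(p1, p2):
    # Merge the two parent permutations as in a riffle shuffle, then extract
    # two children: the first occurrences of each value, and the second
    # occurrences.  Both children are again permutations.
    merged, a, b = [], list(p1), list(p2)
    while a or b:
        src = a if (a and (not b or random.random() < 0.5)) else b
        merged.append(src.pop(0))
    seen = set()
    first, second = [], []
    for v in merged:
        (second if v in seen else first).append(v)
        seen.add(v)
    return first, second

# Example: crossing two orderings of 0..7.
c1, c2 = riffle_crossover([0, 1, 2, 3, 4, 5, 6, 7], [7, 5, 3, 1, 6, 4, 2, 0])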
For a pdf version of this paper, click here.
For the visuals, click here.
Peter G. Anderson and Curtis Cooper (University of Central Missouri)
Every Positive k-Bonacci-Like Sequence Eventually Agrees with a Row of the k-Zeckendorf Array
For k >= 2, a fixed integer, we work with the k-bonacci sequence, {X_n}, a k-th
order generalization of the Fibonacci numbers, and their use in a Zeckendorf representation
of positive integers. We extend Zeckendorf representations using {X_n | n \in Z} and show that
every sequence of positive integers satisfying the k-bonacci recurrence eventually agrees with
a row of the k-Zeckendorf array.
The Fibonacci Quarterly, 2011, volume 49, no. 4, pp. 303-309.
For a pdf version of this paper, click here.
Peter G. Anderson and Larry Ericksen
Patterns in Differences Between Rows in k-Zeckendorf Arrays
For a fixed integer k >= 2,
we study the k-Zeckendorf array, X_k, based upon the
k-th order recurrence u_n = u_{n-1} + u_{n-k}.
We prove that the pattern of differences
between successive rows
is a k-letter infinite word generalizing the infinite Fibonacci word.
Accepted for publication in the Fibonacci Quarterly, 2011
For a pdf version of this paper, click here.
Back to Peter Anderson's Home Page