# Two Topics in Neural Networks

Peter G. Anderson

Department of Computer Science

Rochester Institute of Technology

pga@cs.rit.edu

## Part I: Using the J Language for Neural Net Experiments

We introduce the programming language **J**
and show its applicability for experimenting
with neural networks and genetic algorithms.
We illustrate the use of **J** with complete
programs for perceptron and back propagation learning.
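The talk's complete programs are written in J; as an illustrative sketch only (not the talk's code), the perceptron learning rule mentioned above can be expressed in a few lines of conventional array-style Python. The learning rate, epoch count, and the AND training set are assumptions chosen for the example.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=100):
    """Classic perceptron learning: threshold unit, error-driven updates."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a constant bias input
    w = np.zeros(Xb.shape[1])                  # weights plus bias, start at 0
    for _ in range(epochs):
        for xi, target in zip(Xb, y):
            out = 1 if w @ xi > 0 else 0       # threshold activation
            w += lr * (target - out) * xi      # perceptron update rule
    return w

# Logical AND is linearly separable, so the perceptron converges on it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w = train_perceptron(X, y)
preds = [1 if w @ np.append(xi, 1) > 0 else 0 for xi in X]
```

The same error-driven update appears in the back-propagation program as the output-layer delta, which is one reason the two J programs share much of their structure.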

## Part II: Training Wheels for Encoder Networks

We develop a new approach to training encoder feedforward
neural networks and apply it to two classes of problems.
Our approach is to initially train the
network with a related, relatively easy-to-learn
problem, and then gradually replace the
training set with harder problems, until the
network learns the problem we originally intended to solve.

The problems we address are modifications of
the common *N-2-N* encoder network problem with *N*
exemplars, the unit vectors **e_k** in *N*-space.
Our first modification of the problem is to
use objects consisting of **paired** 1's
(**e_k** + **e_(k+1)**, with subscripts
taken *mod N*). This requires an
*N-2-N* net to organize the images of the
exemplars in 2-space **ordered** around a circle.
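As a sketch of the exemplar set just described (an illustration, not the talk's J code): the *k*-th pattern is **e_k** + **e_(k+1)** with subscripts mod *N*, so the last pattern wraps around.

```python
import numpy as np

def paired_ones(N):
    """The N paired-ones exemplars: row k is e_k + e_{k+1}, indices mod N."""
    X = np.zeros((N, N), dtype=int)
    for k in range(N):
        X[k, k] = 1
        X[k, (k + 1) % N] = 1  # wrap-around gives the circular structure
    return X

X = paired_ones(5)
# The final row wraps: [1, 0, 0, 0, 1]
```

The wrap-around is what forces the 2-unit hidden layer to place the exemplars' images in order around a circle rather than along a line.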
Our second modification is to use patterns
consisting of **two** objects; each object
is a pair of adjacent 1's; the objects must
be separated from each other. This problem
can be learned by an *N-4-N* network which
must organize the images of the exemplars in
4-space in the form of a **Möbius strip**.

The easy-to-learn problem in both cases
involves replacing the two-ones signal
**e_k** + **e_(k+1)** with a
block-signal of length **B**:
**e_k** + **e_(k+1)** + ... + **e_(k+B-1)**.
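A minimal sketch of this "training wheels" construction (the shrinking schedule is an assumption; the abstract says only that the easy set is gradually replaced by harder ones):

```python
import numpy as np

def block_signals(N, B):
    """N exemplars; row k is the block e_k + ... + e_{k+B-1}, indices mod N."""
    X = np.zeros((N, N), dtype=int)
    for k in range(N):
        for j in range(B):
            X[k, (k + j) % N] = 1  # B consecutive 1's, wrapping mod N
    return X

# An assumed curriculum: begin with wide, easy blocks and shrink the
# block width toward B = 2, the original paired-ones target problem.
curriculum = [block_signals(8, B) for B in (4, 3, 2)]
```

Wider blocks overlap more, which makes neighboring exemplars more similar and the initial problem correspondingly easier to learn; shrinking **B** then hands the network the harder target problem a step at a time.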
In several cases, our method allowed us to
train networks that otherwise failed to train;
in other cases, it trained networks roughly ten
times faster than direct training on the target problem.
