In this short chapter we will look at the events that shaped software development as we know it today, and try to get a feel for what it means to write software.
The idea of using a machine to do work that we consider mental rather than physical has been with us for a long time. Mechanical calculators go back at least to the 17th century (and much, much further if one includes the abacus), with separate inventions by Schickard, Pascal, and Oughtred (inventor of the infamous slide rule). In the first year of the 19th century, Jacquard created a loom programmed with punched cards. This was a different kind of machine; the machine itself did physical work, but the cards stored the mental plan of how the work was to be done. That is, the machine could be adapted to perform different tasks based on a stored program. Soon afterwards, Babbage created his more sophisticated calculating machine, which he called the Difference Engine. Then, in the 1830's, he began to make plans to build what many believe to be the first programmable computer -- the Analytical Engine. In fact, he shared his work with Ada Lovelace, who in turn wrote programs for this proposed machine. Lovelace is widely considered to be the first computer programmer. Unfortunately, Babbage never built his machine, and so the programs remained notes on paper.
What separates pre-20th century work in computing from that done starting in the 20th century is the technology on which the computing mechanism is based. Everything up to this point was built out of parts that moved, mechanically, to produce the result. At the time of the writing of this book, most automobiles still use this type of technology in their odometers -- the transmission spins a cable that connects to some carefully designed gear assemblies so that the wheel containing the least significant digit is always turning, but each of the wheels to its left turns only when the wheel to its right changes from 9 to 0, as if a mechanical "signal" were sent at that instant from the right wheel to the left one! Even in the mid 20th century, mechanical devices such as these were quite common in calculators ("adding machines") and cash registers, but they have now all but disappeared.
What happened in the 20th century was the growing use of electricity in computing machines. Zuse invented what might now be seen as a transitional machine, as it was based on relays (a relay uses electrical power to close a mechanical switch). More importantly, it illustrated the usefulness of logic by performing all of its computations in binary, or base 2. This is because relays have only two states, On and Off, which can be interpreted as True and False, or 0 and 1.
What we now consider the modern computer first appeared during World War II. These machines used pure electronics (first vacuum tubes and later transistors) to represent information and to perform computations. So many people played key roles at this point that I will not single out any specific names.
However, the programmers of those machines did nothing like what we do today. They would instruct the computer to perform specific operations by connecting jumper wires on plug boards in specific patterns, and then attaching these plug boards to the main machine. It took people like von Neumann to point out that, since computers can store data in their memory, we should also be able to encode our instructions into some form of data and store those in memory, too. All that is then needed is a "permanently wired" module in the computer that reads these memory locations, interprets the code, and performs the operations that they specify! These coded instructions are what we call a program.
To be sure, most readers of this book will probably not consider this interpretation of stored programs a simple thing to design, but believe it or not, most computer science and computer engineering students have learned how to do this by their third year of college!
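To get a concrete feel for what that "permanently wired" module does, here is a deliberately tiny sketch, written in Java (a language we will meet again in a moment). Everything in it -- the instruction codes, the memory layout, even the idea of a machine with only "add" and "halt" instructions -- is invented purely for illustration; real instruction sets are far richer.

public class TinyMachine {
    public static void main( String[] args ) {
        int[] memory = new int[ 32 ];     // one memory holds both the code and the data

        // The "program": instruction code 1 means "add the contents of one
        // cell into another cell", and instruction code 2 means "halt".
        memory[ 0 ] = 1;   // add ...
        memory[ 1 ] = 10;  // ... the contents of cell 10 ...
        memory[ 2 ] = 11;  // ... into cell 11
        memory[ 3 ] = 2;   // halt

        // The "data":
        memory[ 10 ] = 7;
        memory[ 11 ] = 35;

        int pc = 0;                       // location of the next instruction
        while ( memory[ pc ] != 2 ) {     // keep going until we reach a "halt" code
            if ( memory[ pc ] == 1 ) {
                memory[ memory[ pc + 2 ] ] =
                    memory[ memory[ pc + 2 ] ] + memory[ memory[ pc + 1 ] ];
            }
            pc = pc + 3;                  // move on to the next instruction
        }

        System.out.println( memory[ 11 ] );   // prints 42
    }
}

The details are not important; what matters is that the "program" sitting in the first few memory cells is nothing but data until the little loop at the bottom reads it, interprets it, and carries it out.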
So, what did we have at this point? A computer with a significant amount of memory, some of which got loaded up with programs. These programs were hand-coded by our predecessors in this profession, placed on some kind of medium like tape or cards, and fed into the computer for execution. In the 1950's, people like Backus and Hopper made this job much easier by inventing high-level programming languages, so that the cryptic machine code was created automatically from human-written programs that read more like the sentences of a natural language (but not much more).
As far as the specification of individual computer instructions goes, not much has changed in the last half century. For fun, let's look at a piece of program code. Of course, you are not yet expected to be able to decipher exactly what is going on here; that is one of the goals of this book! The goal of each of these little code fragments is to provide instructions on how to add up ten numbers that are stored in a little piece of memory called an array (much like a one-dimensional matrix). The array will be called numbers, and the sum will be put in a memory location called sum. We will not show any input or output, to keep it simple.
First, here it is in Backus's language, FORTRAN (Formula Translator), which first appeared in 1956:
      sum = 0
      do 101 i = 1, 10
101   sum = sum + numbers( i )
Next, here it is in Meyer's language, Eiffel, first appearing in 1986:
from
    sum := 0
    i := 1
until
    i > 10
loop
    sum := sum + numbers.item( i )
    i := i + 1
end
Finally, here is how the code would look in Gosling's 1991 language, Java (which, for this small piece of code, is identical to Stroustrup's 1986 language, C++, and Ritchie's 1972 language, C):
for ( sum=0, i=0; i<10; i++ ) {
    sum = sum + numbers[ i ];
}
Although the reader cannot be expected to fully comprehend the workings of these code fragments, each reader might decide that he or she prefers one language over another based on certain personal criteria. What is far more interesting is that no language yet allows you to say:
Add the elements of array numbers together, and put the answer in sum.
(Interestingly, Hopper's 1961 language, COBOL, comes the closest!)
So what has been happening during the last 50 years? One answer is that language designers, and more generally, software development method experts, have been concentrating on structure: How should the data in a program be organized? How should the instructions that work on that data be grouped? How should the pieces of a large program relate to one another?
In this book, we shall study software development techniques based on one of the most popular sets of answers to these questions. These answers are collectively known as Object Technology, and a method, language, or program that exhibits object technology is considered Object-Oriented. We'll start looking at that in the next chapter. For now, let's make some general observations about software, and how it relates to accepted notions about engineering.
Engineering has traditionally been considered the process of using the research done in the sciences to design and build useful things. Certainly, when we build software we are doing design, but what is the underlying science? Well, the quick answer is computer science, but that does not really tell us anything. What kind of science is it? If it is the study of computers, then that is a very strange kind of science, since computers are not some mysterious, naturally occurring phenomenon! They are machines built by other engineers!
The answer most often given is that software engineering is built on the "science" of computing, which is really a branch of mathematics. Computing is in turn based on mathematical fields such as logic, information theory, algebra, and number theory. The important thing you should realize here is that it is not based on any physical phenomena!
How does this impact the notion of engineering? Well, when a traditional engineer designs a system, like a bridge, electrical circuit, or a mechanical machine, she or he must go through steps something like this:

1. Determine the requirements that the system must satisfy.
2. Create a design that meets those requirements, using models from the underlying science.
3. Build the system out of physical materials.
4. Test the finished product against the original requirements.
Of course, most engineers know that it is almost impossible to get it right the first time, so the last step forces you to go back to the earlier steps many times, but that is not the main point.
When a software developer designs a system, there is a small but significant difference:

1. Determine the requirements that the system must satisfy.
2. Create a design that meets those requirements.
3. Write a program that expresses the design in complete detail.
4. Test the result against the original requirements.
What does the difference in that third step imply? That what software folks call "implementation" is just a very precise design specification, written using a language that the computer can understand. So, we end up with a dilemma: where is the end product? Well, just as software engineering is not based on anything in the physical universe, the software products themselves are not found there either! The end product of a software development effort is a very detailed, highly structured set of intellectual ideas, plans, etc. In order to make them useful, we usually write them in "computer code", which can be in any of the forms we discussed earlier, and many others as well.
You might say, "No, since the program is loaded into a computer, the computer is the end product." Unfortunately, the computer was designed and built according to an electrical engineer's specifications. "Well, you said earlier that the program is stored in memory. That means we build the memory to run the program, so the memory is the end product." Again, the memory was designed by electrical or microelectronics engineers.
Think of it this way. If a program fails because one of the memory chips has gone bad, is that a bad software design, or a bad computer hardware design? To whom would you want to speak to try to keep that unwanted error from happening again? I think you would agree that the software engineer would not be culpable here.
To conclude, you are about to embark on a study of a very unusual discipline: that of designing products that you will never be able to hold in your hand, stand on, or even smash in frustration! All you can do is pass them on to a very fast, but simple-minded machine called a computer, and see how good a job you did in telling that simpleton what it is supposed to do. I hope that you will feel the same elation I have whenever that simple machine finally does what you intended it to do, and you see the fruits of all your work realized in a truly unique way!
© 1997 James Heliotis