## 12.2. How will we solve these problems?

1. The way we will get around (1) is to do our measurements analytically, not experimentally. In other words, we will analyze the algorithm -- or the relevant aspects of a program implementing it -- in order to figure out its efficiency and the factors that affect it.

Most commonly, we measure one of the following:

```
--   the number of additions, multiplications, etc. (for numerical algorithms)
--   the number of comparisons (for searching, sorting)
--   the number of data moves (assignment statements, maybe also parameters)
--   the amount of memory required
```

How long does it take to add, multiply, divide, or compare two numbers on a typical workstation?

A test program:

```
#include <stdio.h>
#include <stdlib.h>

#ifndef LIMIT
#define LIMIT   500000000
#endif

int main()
{
	int i, a, x;

	scanf("%d", &x);

	for ( i = 1; i < LIMIT; i++ ) {
#ifdef ASSIGN
		a = x;
#endif
#ifdef ADD
		a = x + 343454;
#endif
#ifdef MUL
		a = x * 343454;
#endif
#ifdef DIV
		a = x / 343454;
#endif
#ifdef COMPARE
		a = x < 343454;
#endif
		;
	}
	printf("Limit: %d\n", LIMIT);
	exit(0);
}
```

The results for a SPARCstation 20 running SunOS 5.4 are the following. The program was compiled with gcc version 2.7.2:

```
% echo 2 | time ./for
Limit: 500000000
real     1:43.9
user     1:24.4
sys         0.0
```

The bare for-loop needed 84.4 seconds.

```
% echo 2 | time ./assign
Limit: 500000000
real     2:25.4
user     1:48.4
sys         0.1
```

The 500000000 assignments needed 20 seconds. (One assignment needs .000000040 seconds.)

```
% echo 2 | time ./add
Limit: 500000000
real     2:21.1
user     2:06.3
sys         0.0
```

The 500000000 additions needed 126.3 - 104.4 = 21.9 seconds. (One addition needs .000000043 seconds.)

```
% echo 2 | time ./mul
Limit: 500000000
real     3:25.4
user     3:03.4
sys         0.0
```

The 500000000 multiplications needed 183.4 - 104.4 = 79 seconds. (One multiplication needs .000000158 seconds.)

```
% echo 2 | time ./div
Limit: 500000000
real     3:14.7
user     2:56.3
sys         0.0
```

The 500000000 divisions needed 176.3 - 104.4 = 71.9 seconds. (One division needs .000000143 seconds.)

```
% echo 2 | time ./cmp
Limit: 500000000
real     2:43.3
user     2:16.7
sys         0.1
```

The 500000000 compare operations needed 136.7 - 104.4 = 32.3 seconds. (One compare operation needs .000000064 seconds.)

A Java program:

```
class ForJavaOnly {
	public static void main (String argv[]) {	// main program
		int LIMIT = 500000000;
		int i, a, x;

		x = 3;

		System.out.println("Limit: " + LIMIT);
	}
}
```

The startup time is:

```
% time java ForJavaOnly
Limit: 500000000

real        5.7
user        0.3
sys         0.5
```

The assignments:

```
class For {
	public static void main (String argv[]) {	// main program
		int LIMIT = 500000000;
		int i, a, x;

		x = 3;

		for ( i = 1; i < LIMIT; i++ )
			a = x;

		System.out.println("Limit: " + LIMIT);
	}
}
```

```
% time java For
Limit: 500000000
real    21:13.5
user    15:53.5
sys         0.7
```

The 500000000 assignments needed 15 * 60 + 53.5 = 953.5 seconds. (One assignment needs .0000019070 seconds.)

If you compare this with the C program, you will see that we would need a .0000019070 / .000000040 ~= 47 times faster computer. Wait a 'few weeks' and we will have one.

2. The way we will get around (2) is to express efficiency (or whatever we choose to measure in (1)) as a function of the input: Efficiency(algorithm A) = a function F of some property of A's input.

Definition 9.1 (Big-O notation)

Let t: N -> R and f: N -> R be functions on the natural numbers N (R denotes the real numbers).

f(n) ~= O(t(n)) if there exist constants M, C in R such that f(n) <= M * t(n) + C for all n in N.

With Big-O notation, we are strictly concerned with the dominant term; low-order terms and constant coefficients are ignored.

Example:

```
Function                        Complexity
--------------------------------------------------------
f(n) = 3 * n + 3                f(n) ~= O(n)
f(n) = 42 * n * n - 42          f(n) ~= O(n^2)
f(n) = 24 * log(n) * n - 42     f(n) ~= O(n * log(n))
f(n) = 2^n - 4711               f(n) ~= O(2^n)
```

We discuss the complexity in class!!!