FAQ for Programming Assignment 1

Information will be added here as questions are generated and answered.

  1. There are several examples provided with M2MI. Which should we focus on?

    Well, the parallel particles example found in /home/adjunct/anhinga/public_html/m2miapi20040302/lib/edu/rit/parallel/particles is the most appropriate of the M2MI examples, but please see the answer to the next question. In particular, I believe it would behoove us to develop an interface comparable to the DoubleMatrix stuff in the util directory. There are a number of points to consider:

    It turns out that quite a bit of what is set up for DoubleMatrix deals with working on a cluster, which we are NOT doing. So, DoubleMatrix is not a good design pattern for us.

    However, there is still quite a bit in the setup of the particles example that will be useful.

  2. I am finding it difficult to find my way through all the levels of the particles example; can you help me out?

    Please check out the "minimal" example discussed in the Readme.html at http://www.cs.rit.edu/usr/local/pub/ncs/parallel/Anhinga/. I think this is about as small as you can get and still do something in parallel.

  3. Is there anything available that is equivalent to the openMP barrier?

    Not really, but the MultiSemaphore can be used for such coordination. It is used both in the example at http://www.cs.rit.edu/usr/local/pub/ncs/parallel/Anhinga/ and in the particles example. Each time a Worker (DoubleMatrixSlice) completes a step, it ups the MultiSemaphore. When all Workers (DoubleMatrixSlices) have upped the MultiSemaphore, the next step is carried out if there are steps left to do. The Master (DoubleMatrix) similarly processes these ups so that it can let the Main program know when enough steps have been done for it to do something else.
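    The up-only counting behavior described above can be sketched in plain Java. Note that UpCounter, up(), and waitForCount() below are illustrative names for the idea, not the actual M2MI MultiSemaphore API:

    ```java
    // Sketch of a MultiSemaphore-style counter: it only ever goes up, and
    // waiters block until it reaches a target count. Illustrative only.
    public class UpCounter {
        private long count = 0;

        // Called by each worker when it finishes a step.
        public synchronized void up() {
            count++;
            notifyAll();   // wake any thread blocked in waitForCount
        }

        // Block until at least 'target' up() calls have occurred.
        public synchronized void waitForCount(long target) throws InterruptedException {
            while (count < target) {
                wait();
            }
        }

        public synchronized long count() {
            return count;
        }
    }
    ```

    With n workers, the master would wait for a count of k*n before considering step k complete; the counter is never reset, which matches the answer to question 14 below.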

  4. Why does the particles example use MultiHandles rather than OmniHandles?

    The particles example is set up this way so that if you were running on a cluster, there could be a number of different particles programs executing at the same time and they wouldn't interfere with each other.

    The implications for your assignment are that you can go either way. Using MultiHandles, though, is likely to be more extensible once M2MI works correctly on a cluster.

  5. The instructions for executing the particles example show two parameters named s, the number of Slices. What is the relationship between these s's?

    While my tests show that the application runs with any combination of s's, ideally they will be the same value and will equal the number of processors to be used.

  6. How do we indicate how many processors are being used?

    That is indicated by the m2mi.maxcalls variable in the m2mi.properties file. Do note, though, that this number can be overridden by a system property on the command line, e.g.

    java -Dedu.rit.m2mi.maxcalls=4 edu.rit.parallel.particles.Main2 10 1000 10 10 .1 1 4
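    For reference, a -D flag like the one above sets a Java system property, which a program can read via System.getProperty. The snippet below is a hypothetical sketch of that mechanism, not the actual M2MI code; check m2mi.properties for the authoritative key name:

    ```java
    // Illustrative sketch: read an integer system property such as the
    // one set by -Dedu.rit.m2mi.maxcalls=4, falling back to a default.
    public class MaxCalls {
        public static int maxCalls(int dflt) {
            String v = System.getProperty("edu.rit.m2mi.maxcalls");
            return (v == null) ? dflt : Integer.parseInt(v);
        }
    }
    ```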

  7. Do we need to use M2MP with M2MI on an SMP machine?

    The answer is NO! We need neither the m2mp.properties file in our home directories nor a running Daemon! The reason is that in this case we are running one Java process which is spawning a number of threads. The Daemon is necessary only if more than one process is executing, as in the Chat example, or if we were trying to implement this on a cluster.

    Another implication of this is that since we have only two teams and are not starting up Daemons, we need not be concerned with having a unique global id for each machine. One team will use parasite and the other paradise.

  8. I downloaded and tried to recompile the files for the particles example, but I ran into problems. What could be wrong?

    You will need to comment out the package statements in each of the source files. This should take care of your problems.

  9. The particles example seems very complex. Does our program need to have all those layers?

    No, the particles example is designed to be flexible and should work with both SMP and cluster setups. Thus, there is a lot of structure built in to handle this.

    Here is a broad idea of an approach you might take:


    worker - implemented as a state machine rather than having the master/main invoke separate routines. It must do all of these things:


  10. I've been looking more at the code and I don't understand why the step() method is synchronized in DoubleMatrixSlice. It seems like this forces processors to do their first call to doOneStep sequentially rather than in parallel. I understand why it is necessary for stepPerformed to be synchronized, but not why step and some of the others are. What am I not seeing?

    These methods are synchronized as a matter of general principle, in order to enforce monitor semantics. These methods each access mutable fields in the object. If these methods were not synchronized, it would be possible for one thread to call one method and write the fields at the same time as another thread called another method and wrote the fields, which might lead to an inconsistent state.

    When the main program calls step() on a Multihandle, a separate thread calls step() on each separate Slice object. Each thread is calling step() on a "different" Slice object. It is not the case that multiple threads are calling methods on the "same" Slice object. The step() calls "do" execute simultaneously, not one at a time, because each step() call is synchronizing on a different Slice object, not on the same Slice object.
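    The per-instance nature of the lock can be demonstrated with a small sketch. The Slice class below is illustrative, not the actual DoubleMatrixSlice: synchronized methods protect each object's own mutable state, but two threads working on two different instances never contend for the same lock:

    ```java
    // Sketch: 'synchronized' locks per instance, so calls on different
    // objects run in parallel while calls on the same object serialize.
    public class Slice {
        private int steps = 0;

        public synchronized void step() {   // locks only this Slice
            steps++;
        }

        public synchronized int steps() {
            return steps;
        }

        public static void main(String[] args) throws InterruptedException {
            final Slice a = new Slice();
            final Slice b = new Slice();
            // Each thread hammers a *different* Slice; since the locks are
            // per-object, the step() calls overlap in time.
            Thread ta = new Thread(() -> { for (int i = 0; i < 100000; i++) a.step(); });
            Thread tb = new Thread(() -> { for (int i = 0; i < 100000; i++) b.step(); });
            ta.start(); tb.start();
            ta.join(); tb.join();
            System.out.println(a.steps() + " " + b.steps());   // 100000 100000
        }
    }
    ```

    If both threads instead called step() on the same Slice, the synchronization would serialize them, which is exactly the monitor protection described above.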

  11. When attaching a Multihandle to an object, you cast the Multihandle as a Multihandle, e.g.,
    ((Multihandle) handleToAll).attach (master);
    If handleToAll is declared as a Multihandle, why do we need to do the casting?

    You don't. You only have to cast it to type Multihandle if handleToAll is declared to be some other type, such as an interface type.
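    The same rule applies in plain Java, independent of M2MI. The classes below are hypothetical names used only to illustrate why a variable declared with an interface type needs a cast before you can call a method the interface does not declare:

    ```java
    // Illustrative analogue of the Multihandle cast: a variable declared
    // with an interface type must be cast to reach the concrete type's
    // extra methods.
    interface Greeter {
        String greet();
    }

    class LoudGreeter implements Greeter {
        public String greet()   { return "HI"; }
        public String whisper() { return "hi"; }   // not in the interface
    }

    public class CastDemo {
        public static String demo() {
            Greeter g = new LoudGreeter();        // declared as the interface type
            // g.whisper();                       // would not compile
            return ((LoudGreeter) g).whisper();   // cast exposes the extra method
        }
    }
    ```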

  12. If my Workers are swapping arrays on their own, is there a reason for them to invoke stepCompleted on the Master before doing their own normal stepCompleted processing?

    It depends on what you want the master to do. In the class DoubleMatrix, the master is swapping the array references, and keeping track of when the step is completed so as to return from the waitForStep() method which the Main program is calling. In your case, if the Master doesn't need to do anything when the Workers finish a step, then there's no need for the Workers to inform the Master when they finish a step.

  13. Isn't the Multihandle invocation at the end of doOneStep already notifying the Master and all of the other Workers that they have finished a step?

    Yes, so the invocation in the Worker's stepCompleted is redundant and unnecessary in our case.

  14. In the stepCompleted in the Master, notifyAll() gets invoked. That doesn't happen in the Worker. How does the MultiSemaphore get set back to 0 for the next step, or doesn't it need to?

    It doesn't need to. The way the MultiSemaphore is used, the counters continually go up, and no one ever counts them down again.

Nan C. Schaller
Rochester Institute of Technology
Computer Science Department
102 Lomb Memorial Dr.
Rochester, NY 14623-5608
telephone: +1.585.475.2139
fax: +1.585.475.7100
e-mail: ncs@cs.rit.edu
March 25, 2004