Alan Kaminsky -- Department of Computer Science -- Rochester Institute of Technology

4003-541-70/4005-741-70
Data Communications and Networks II
Module 1. High Performance Network Programming -- Lecture Notes

Prof. Alan Kaminsky -- Spring Quarter 2006
Rochester Institute of Technology -- Department of Computer Science


Echo Server Version 1

  • Whenever a client sets up a socket connection, read bytes from the socket and write (echo) them back to the socket
     
  • Any number of sockets may be open at the same time
     
  • Design
    • "Old" I/O
      • Class java.net.ServerSocket
      • Class java.net.Socket
      • Class java.io.InputStream
      • Class java.io.OutputStream
    • Multi-threaded design
    • Main program thread accepts connections
    • Each connection processed in its own separate thread
    • Threads obtained from an executor, a.k.a. thread pool (JDK 1.5)
      • Interface java.util.concurrent.ExecutorService
      • Class java.util.concurrent.Executors
         
  • Package edu.rit.datacomm2.echo
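The Version 1 design above can be sketched as follows. This is a minimal illustration, not the course's actual edu.rit.datacomm2.echo code; the class name Echo1 and the buffer size are made up.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the Version 1 design: "old" I/O, multi-threaded, with the
// connection threads obtained from an executor (thread pool).
public class Echo1 {
    private final ServerSocket serverSocket;
    private final ExecutorService pool = Executors.newCachedThreadPool();

    public Echo1(int port) throws IOException {
        serverSocket = new ServerSocket(port);
    }

    public int port() { return serverSocket.getLocalPort(); }

    // Main program thread: accept connections, hand each to the pool.
    public void serve() throws IOException {
        for (;;) {
            final Socket socket = serverSocket.accept();
            pool.execute(new Runnable() {
                public void run() { echo(socket); }
            });
        }
    }

    // Connection thread: read bytes from the socket, write them back.
    private static void echo(Socket socket) {
        try {
            InputStream in = socket.getInputStream();
            OutputStream out = socket.getOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
                out.flush();
            }
        } catch (IOException exc) {
        } finally {
            try { socket.close(); } catch (IOException exc) {}
        }
    }
}
```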


Echo Server Version 2

  • Design
    • New I/O (JDK 1.4)
    • Buffers
      • Class java.nio.Buffer
      • Class java.nio.ByteBuffer
    • Channels
      • Class java.nio.channels.ServerSocketChannel
      • Class java.nio.channels.SocketChannel
    • Non-blocking I/O
      • Class java.nio.channels.Selector
      • Class java.nio.channels.SelectionKey
    • Single-threaded design
    • Main program thread does it all -- accepts connections, reads sockets, writes sockets
       
  • Package edu.rit.datacomm2.echo
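The single-threaded Version 2 design can be sketched as follows. Again this is an illustration, not the course's code; the class name Echo2 is made up, and echoing back inside the read handler is a simplification that works for small blocks.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// Sketch of the Version 2 design: New I/O, non-blocking, a single thread
// multiplexing all the channels with a selector.
public class Echo2 {
    private final Selector selector;
    private final ServerSocketChannel serverChannel;

    public Echo2(int port) throws IOException {
        selector = Selector.open();
        serverChannel = ServerSocketChannel.open();
        serverChannel.socket().bind(new InetSocketAddress(port));
        serverChannel.configureBlocking(false);
        serverChannel.register(selector, SelectionKey.OP_ACCEPT);
    }

    public int port() { return serverChannel.socket().getLocalPort(); }

    // One thread does it all: accepts connections, reads sockets, writes sockets.
    public void serve() throws IOException {
        for (;;) {
            selector.select();
            Iterator<SelectionKey> iter = selector.selectedKeys().iterator();
            while (iter.hasNext()) {
                SelectionKey key = iter.next();
                iter.remove();
                if (key.isAcceptable()) {
                    SocketChannel channel = serverChannel.accept();
                    channel.configureBlocking(false);
                    // Attach a per-connection buffer to the selection key.
                    channel.register(selector, SelectionKey.OP_READ,
                        ByteBuffer.allocate(4096));
                } else if (key.isReadable()) {
                    SocketChannel channel = (SocketChannel) key.channel();
                    ByteBuffer buf = (ByteBuffer) key.attachment();
                    int n = channel.read(buf);
                    if (n == -1) { key.cancel(); channel.close(); continue; }
                    buf.flip();                           // buffer: filling -> draining
                    while (buf.hasRemaining()) channel.write(buf);
                    buf.clear();                          // buffer: draining -> filling
                }
            }
        }
    }
}
```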


Echo Server Version 3

  • Java New I/O is supposed to give better performance and scalability for server applications
     
  • Is this really the case?
     
  • Design
    • Same as EchoServer1, plus it measures throughput -- bytes/sec read and written
    • "Old" I/O
    • Multi-threaded design with a thread pool
    • Throughput measurement
      • Class java.util.concurrent.atomic.AtomicInteger
      • Interface java.util.concurrent.ScheduledExecutorService
      • Class java.util.concurrent.Executors
         
  • Package edu.rit.datacomm2.echo
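The throughput-measurement piece can be sketched like this: every connection thread adds to a shared atomic counter, and a scheduled task samples and resets the counter once a second. The class name ThroughputMeter is made up; only the JDK classes named in the bullets above are real.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of throughput measurement: connection threads bump a shared
// AtomicInteger; a ScheduledExecutorService task reports bytes/sec.
public class ThroughputMeter {
    private final AtomicInteger byteCount = new AtomicInteger(0);
    private final ScheduledExecutorService timer =
        Executors.newSingleThreadScheduledExecutor();

    // Called by every connection thread after each read or write.
    public void add(int bytes) { byteCount.addAndGet(bytes); }

    // Sample and reset in one atomic step, so no bytes are lost
    // between measurement intervals.
    public int sampleAndReset() { return byteCount.getAndSet(0); }

    // Report bytes/sec once a second.
    public void start() {
        timer.scheduleAtFixedRate(new Runnable() {
            public void run() {
                System.out.println(sampleAndReset() + " bytes/sec");
            }
        }, 1, 1, TimeUnit.SECONDS);
    }
}
```

getAndSet(0) is the key design choice: a plain read followed by a separate reset would silently drop any bytes counted between the two steps.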


Echo Server Version 4

  • Design
    • Same as EchoServer2, plus it measures throughput -- bytes/sec read and written
    • New I/O (JDK 1.4)
    • Single-threaded design with a selector
       
  • Package edu.rit.datacomm2.echo

Echo Server Version 5

  • Design
    • Similar to EchoServer1, plus it measures throughput -- bytes/sec read and written
    • "Old" I/O
    • Multi-threaded design, with a new thread created for each connection (no thread pool)
       
  • Package edu.rit.datacomm2.echo
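The Version 5 accept loop differs from Version 1 only in where the connection thread comes from; a sketch (class name Echo5 made up):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of the Version 5 design: "old" I/O, with a brand-new thread
// created for every accepted connection -- no thread pool.
public class Echo5 {
    private final ServerSocket serverSocket;

    public Echo5(int port) throws IOException {
        serverSocket = new ServerSocket(port);
    }

    public int port() { return serverSocket.getLocalPort(); }

    public void serve() throws IOException {
        for (;;) {
            final Socket socket = serverSocket.accept();
            new Thread() {               // create, start, discard -- no reuse
                public void run() { echo(socket); }
            }.start();
        }
    }

    private static void echo(Socket socket) {
        try {
            InputStream in = socket.getInputStream();
            OutputStream out = socket.getOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
                out.flush();
            }
        } catch (IOException exc) {
        } finally {
            try { socket.close(); } catch (IOException exc) {}
        }
    }
}
```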


Throughput Measurements

  • "Hose" traffic generator
    • Like streaming media
    • Few connections, a large quantity of data per connection
    • Traffic generator sets up one connection to the Echo Server, then reads and writes bytes as fast as it can -- like a fire hose
       
  • "Cannon" traffic generator
    • Like a web browser
    • Many connections, a small quantity of data per connection
    • Traffic generator repeatedly sets up a connection to the Echo Server, writes a small block, reads the block, and closes the connection, as fast as it can -- like shooting cannonballs
       
  • Package edu.rit.datacomm2.echo
  • Testbed
    • I used the CS Department's paranoia cluster, normally used for parallel computing
    • Echo Server ran on one backend processor
    • 1 to 8 traffic generator clients ran on other backend processors
    • Traffic went over the cluster's backend network, a 100 Mbps Ethernet
       
  • Throughput measurements (bytes/sec)
                          Hose Client                           Cannon Client
    No. of   -------------------------------------  -------------------------------------
    Clients  EchoServer3  EchoServer4  EchoServer5  EchoServer3  EchoServer4  EchoServer5
    -------  -----------  -----------  -----------  -----------  -----------  -----------
    1        9.01e6       5.98e6       9.11e6       9.66e5       9.14e5       9.13e5
    2        1.16e7       7.10e6       1.20e7       1.62e6       1.42e6       1.31e6
    3        1.15e7       7.73e6       1.21e7       1.78e6       1.65e6       1.43e6
    4        1.17e7       7.88e6       1.20e7       1.81e6       1.68e6       1.45e6
    8        1.17e7       8.50e6       1.17e7       1.81e6       1.76e6       1.42e6
    


  • Conclusions?
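For reference, the "cannon" traffic pattern described above can be sketched as follows. This is an illustration, not the course's traffic generator; the class name CannonClient and its parameters are made up.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

// Sketch of the "cannon" pattern: many short-lived connections, a small
// block of data per connection. Returns the number of shots completed.
public class CannonClient {
    public static int fire(String host, int port, int shots, int blockSize)
            throws IOException {
        byte[] block = new byte[blockSize];
        int completed = 0;
        for (int i = 0; i < shots; ++i) {
            Socket socket = new Socket(host, port);  // set up a connection
            OutputStream out = socket.getOutputStream();
            InputStream in = socket.getInputStream();
            out.write(block);                        // write one small block
            out.flush();
            int off = 0;                             // read the echoed block back
            while (off < blockSize) {
                int n = in.read(block, off, blockSize - off);
                if (n == -1) break;
                off += n;
            }
            socket.close();                          // close, then fire again
            if (off == blockSize) ++completed;
        }
        return completed;
    }
}
```

A "hose" client is the same idea with the loop structure inverted: one connection, then read and write as fast as possible for the duration of the test.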


The RIT Overlay Network (RON)

  • Purpose
    • To further illustrate high performance network software design using Java New I/O
    • To serve as a testbed for studying data communications in the rest of the course
       
  • RON Architecture
              Layers              Protocols
    +--------------------------+
    |    Application Layer     |
    +--------------------------+
    |   RON Transport Layer    |  Coming soon
    +--------------------------+
    |    RON Network Layer     |  RONP
    +--------------------------+
    | Internet Transport Layer |  UDP
    +--------------------------+
    |  Internet Network Layer  |  IP
    +--------------------------+
    |     Data Link Layer      |  Ethernet
    +--------------------------+
    |     Physical Layer       |
    +--------------------------+
    
  • The RIT Overlay Network Protocol (RONP)

    RON's network layer protocol is the RIT Overlay Network Protocol (RONP). RONP provides an unreliable packet (datagram) delivery service.

    RONP addressing. Each RON endpoint has a unique RONP address, a 32-bit number.

    RONP packet format. A RONP packet consists of a 12-byte header followed by a payload of 0-536 bytes. The header fields are:

    • Bytes 0-3 -- Destination RONP address, most significant byte first. This is the address of the RON endpoint to which the packet must be delivered.
       
    • Bytes 4-7 -- Source RONP address, most significant byte first. This is the address of the RON endpoint that generated the packet.
       
    • Byte 8 -- RON transport protocol ID. This designates the transport protocol at the next higher layer that must process the packet.
       
    • Byte 9 -- Time to live (TTL) in the range 0-255. The source endpoint initializes the TTL field. Each time a RON router receives a packet, the router decrements the TTL field. If the TTL field is 0 after the decrement, the router drops the packet rather than forwarding it.
       
    • Byte 10 -- Reserved for future expansion.
       
    • Byte 11 -- Reserved for future expansion.
       
    The maximum RONP payload size, 536 bytes, is chosen so that a RONP packet will not be fragmented when transferred over the Internet, assuming the minimum Internet MTU of 576 bytes (536 bytes RONP payload + 12 bytes RONP header + 8 bytes UDP header + 20 bytes IP header = 576 bytes MTU).
     
  • Package edu.rit.ron
  • Package edu.rit.ron.test
  • RON router configuration
     
  • RON router operation
     
  • Design of the RON software
    • Class RouterPort encapsulates a datagram channel
    • Class Router encapsulates an I/O thread
    • Single-threaded design; I/O thread reads and writes all channels using a selector
    • Slow data links simulated using timers
      • Send a packet, wait x milliseconds before sending the next packet
      • Receive a packet, wait x milliseconds before forwarding or delivering the packet
      • x depends on the packet length and the port's data rate (bits/sec)
         
  • More on class java.nio.ByteBuffer
    • Capacity, limit, position
    • Byte reading and writing operations
    • Primitive data type reading and writing operations
    • Clear, flip operations
       
  • Demonstrations
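As a concrete example of the ByteBuffer operations above, here is a sketch that assembles a RONP packet following the header layout given earlier in this module. The class RonpPacket is made up, not part of edu.rit.ron; note that ByteBuffer's default byte order is big-endian, which matches "most significant byte first."

```java
import java.nio.ByteBuffer;

// Sketch: building a RONP packet with java.nio.ByteBuffer, exercising
// position/limit, relative and absolute put/get, flip, and clear.
public class RonpPacket {
    public static final int HEADER_SIZE = 12;
    public static final int MAX_PAYLOAD = 536;

    // Fill a buffer with a RONP packet and leave it flipped, ready to be
    // written to a channel.
    public static ByteBuffer build(int dest, int source, byte protocol,
            int ttl, byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(HEADER_SIZE + MAX_PAYLOAD);
        buf.clear();                // position = 0, limit = capacity: filling
        buf.putInt(dest);           // bytes 0-3: destination, MSB first
        buf.putInt(source);         // bytes 4-7: source, MSB first
        buf.put(protocol);          // byte 8: transport protocol ID
        buf.put((byte) ttl);        // byte 9: time to live
        buf.putShort((short) 0);    // bytes 10-11: reserved
        buf.put(payload);           // 0-536 bytes of payload
        buf.flip();                 // limit = position, position = 0: draining
        return buf;
    }

    // Read the destination address out of a received packet.
    public static int destination(ByteBuffer buf) {
        return buf.getInt(0);       // absolute get; does not move the position
    }
}
```

A RON router would hand the flipped buffer straight to a datagram channel's write method, then clear() it to receive the next packet.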


The RON Control Message Protocol (RCMP)

  • RCMP is to RON as the Internet Control Message Protocol (ICMP) is to the Internet Protocol (IP)
     
  • RFC 792 -- Internet Control Message Protocol (Postel, 1981)
     
  • RCMP messages (patterned after ICMP)
    • Echo request
    • Echo reply
       
  • Package edu.rit.ron.rcmp
  • Demonstrations

Copyright © 2006 Alan Kaminsky. All rights reserved. Last updated 23-Mar-2006. Please send comments to ark@cs.rit.edu.