Masters Project Announcement: Collaborative Localization in Wireless Sensor Networks by Saul Rioja

Title: Collaborative Localization in Wireless Sensor Networks
Candidate: Saul Rioja
E-mail: sxr8320@rit.edu
Defence Date: 08/02/2010
Time: 3:00 pm
Location: RIT
URL: http://www.cs.rit.edu/~sxr8320/Documents/FinalReport.pdf

Abstract:
In the past, deploying Wireless Sensor Networks (WSNs) was not easy. Time was required to set up all the sensor nodes and record each of their locations. Then, once an event occurred, the administrator could see where the event had happened based on the information stored during the initial setup.

Today, things have changed and so have the applications. What users now require are sensor nodes capable of organizing themselves and finding their own positions, so that once an event occurs, the administrator is able to see the location of the node as reported by the node itself. There are different ways to achieve this. In this paper, we propose the use of beacon nodes and a number of anchor nodes to find the position of each node in the WSN.

Chair: Minseok Kwon
Reader: Hans-Peter Bischof
Observer: Zack Butler
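The abstract leaves the positioning math implicit. A common way to realize anchor-based localization is linearized least-squares trilateration; the Python sketch below is illustrative only (the formulation and all names are assumptions, not taken from the report):

    import numpy as np

    def localize(anchors, distances):
        """Estimate a node's (x, y) position from ranges to >= 3 anchors.

        Linearizes the circle equations by subtracting the last anchor's
        equation from the others, then solves the resulting linear system
        in a least-squares sense.
        """
        anchors = np.asarray(anchors, dtype=float)
        d = np.asarray(distances, dtype=float)
        xn, yn = anchors[-1]
        A = 2 * (anchors[:-1] - anchors[-1])
        b = (d[-1] ** 2 - d[:-1] ** 2
             + np.sum(anchors[:-1] ** 2, axis=1) - (xn ** 2 + yn ** 2))
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    # A node 5 m from two corners and ~7.07 m from a third, in a 10 m square:
    anchors = [(0, 0), (10, 0), (0, 10)]
    true = np.array([5.0, 5.0])
    dists = [np.linalg.norm(true - a) for a in anchors]
    print(localize(anchors, dists))  # approx. [5. 5.]

With noisy range measurements the same least-squares solve simply returns the best-fit position, which is why more than the minimum three anchors are typically used.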

Masters Project Announcement: A Study of Compression Techniques For Numerical Data by Krishna Tummalapalli

Title: A Study of Compression Techniques For Numerical Data
Candidate: Krishna Tummalapalli
E-mail: kxt9094@cs.rit.edu
Defence Date: 8/3/2010
Time: 9:00 am
Location: Conference room beside CS office, CS department
URL: http://www.cs.rit.edu/~kxt9094/index.htm

Abstract:
In the modern world, more and more applications are being digitalized every day. As digital applications increase, the information needed, processed, and stored by those applications increases. As digital information grows, the cost of operation also increases, since we need more hardware to store it and more network bandwidth to transmit it.

Here is where compression comes into the picture. Compression represents the same amount of information as the original in less space. Hence we save disk space when we store compressed data instead of the original data, and we save network bandwidth when we send compressed data instead of uncompressed data. The price we pay for this saving is the additional CPU cycles needed to compress and decompress the data. For many applications, this process of compression and decompression is more affordable than storing or sending the uncompressed data.

There are different kinds of compression/decompression algorithms on today's market, and each algorithm has a different advantage over the others. For example, one algorithm has better compression speed than the others; another is better at compressing ASCII data than binary data; another is better at compressing audio data than video data. The aim of this project is to find an optimal compression and decompression algorithm for a system called Spiegel. Spiegel is a client-server application in which a client requests massive amounts of simulation trace data and a server transfers the requested data to the client. We aim to compress the data that the server sends to the client.

Chair: Dr. Hans-Peter Bischof
Reader: Dr. James E. Heliotis
Observer: Dr. Minseok Kwon
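As a rough illustration of the ratio-versus-CPU-time trade-off the abstract describes, one might benchmark Python's standard-library codecs on synthetic numerical data. This harness is a sketch, not the project's test setup:

    # Hypothetical benchmark: compare stdlib codecs on binary numerical data.
    import bz2, lzma, struct, time, zlib

    # Synthetic "simulation trace": 100,000 slowly varying doubles.
    data = struct.pack("<100000d", *(i * 0.001 for i in range(100000)))

    for name, compress in [("zlib", zlib.compress),
                           ("bz2", bz2.compress),
                           ("lzma", lzma.compress)]:
        start = time.perf_counter()
        out = compress(data)
        elapsed = time.perf_counter() - start
        print(f"{name:5s} ratio={len(data) / len(out):5.2f} "
              f"time={elapsed * 1000:7.1f} ms")

Typically the stronger codec yields a better ratio at a noticeably higher CPU cost, which is exactly the trade-off the project weighs for Spiegel's trace data.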

Masters Project Announcement: Simulating Caching in Data Distribution and Retrieval Algorithms with Hardware Varying Network Simulator by Abhishek Prabhune

Title: Simulating Caching in Data Distribution and Retrieval Algorithms with Hardware Varying Network Simulator
Candidate: Abhishek Prabhune
E-mail: app6790@cs.rit.edu
Defence Date: 11th August, 2010
Time: 3:00 pm
Location: TBD
URL: www.cs.rit.edu/~app6790

Abstract:
There are many network simulators available today, but none of them has the ability to measure the impact of hardware on the performance of data distribution and retrieval algorithms in file systems. Hence, Alexander G. Maskovyak, a graduate student in RIT Computer Science, decided to build a simulator with this ability. The simulator can describe the hardware characteristics of the client as well as the server, and two distribution and retrieval algorithms have been implemented on the simulation framework. However, no caching algorithms were implemented in that project. The main goal of my master's project was therefore to design and implement different caching algorithms and understand the impact of hardware on their performance. Since the cache plays an important part in determining the performance of any file system, this project is quite significant.

Chair: Prof. Hans-Peter Bischof
Reader: Prof. Minseok Kwon
Observer: Prof. Joe Geigel
Report URL: www.cs.rit.edu/~app6790
Anonymous Report URL: www.cs.rit.edu/~app6790

Masters Project Announcement: Securing the DHT ID Mapping Scheme in Structured Peer-to-Peer Networks by Tejas Dharamshi

Title: Securing the DHT ID Mapping Scheme in Structured Peer-to-Peer Networks
Candidate: Tejas Dharamshi
E-mail: tbd3057@rit.edu
Defence Date: Thursday, August 26th, 2010
Time: 10:00 am
Location: Database Lab (Room #3600)
URL: https://sites.google.com/site/tbd3057mastersproject/

Abstract:
Peer-to-peer (P2P) architectures are a type of network in which each workstation acts as both client and server, with equal responsibilities and capabilities. They have become very popular because of their extensive use in file sharing applications. The most common type of P2P architecture is the 'decentralized and structured' architecture, which makes use of Distributed Hash Tables (DHTs) for indexing; such systems are also known as DHT-based P2P networks. DHT-based networks like Chord and Pastry make use of node identifiers to structure and organize nodes across an identifier space. The robustness and security of such DHT-based P2P networks are affected significantly when a Sybil attack takes place. In a Sybil attack, a malicious user obtains a large number of phony identities and pretends to behave as multiple distinct nodes in the system with a view to "out-voting" honest users.

In this project, I present a node identifier scheme that is decentralized in nature and resists Sybil attacks. The proposed scheme will resist the following classes of Sybil attack:

1) A single participant obtaining a large number of node identifiers by presenting the same IP address and different port numbers.
2) A single participant compromising other participants by obtaining a large number of node identifiers, thereby forming a group Sybil attack.

Chair: Dr. Minseok Kwon
Reader: Dr. Zack Butler
Observer: Dr. Hans-Peter Bischof
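One simple way to frustrate the first attack class above is to derive identifiers from the IP address alone, capping how many identifiers one address can claim. The sketch below is an illustrative assumption, not the scheme the project actually proposes:

    # Illustrative: DHT node IDs derived from the IP address only, so a
    # participant cannot mint many IDs by varying the port number.
    import hashlib

    ID_BITS = 160  # Chord/Pastry-style identifier space

    def node_id(ip: str, slot: int = 0, max_slots: int = 1) -> int:
        """Map an IP address to at most `max_slots` identifiers.

        Hashing only the IP (plus a small, bounded slot index) caps how
        many distinct IDs one address can hold; hashing ip:port instead
        would reopen the first class of Sybil attack described above.
        """
        if not 0 <= slot < max_slots:
            raise ValueError("slot exceeds the per-IP identifier budget")
        digest = hashlib.sha1(f"{ip}/{slot}".encode()).digest()
        return int.from_bytes(digest, "big") % (1 << ID_BITS)

    print(hex(node_id("192.0.2.7")))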

Masters Project Announcement: Demonstrating Realistic Avatar Control in a Virtual Environment Through the Use of a Neural Impulse Actuator by Emmett Coakley

Title: Demonstrating Realistic Avatar Control in a Virtual Environment Through the Use of a Neural Impulse Actuator
Candidate: Emmett Coakley
E-mail: eoc8473@rit.edu
Defence Date: September 7th, 2010
Time: 2:00 pm
Location: ICL1-70-3520
URL: http://eoc8473mastersproject.blogspot.com/

Abstract:
This project focused on the development of techniques to improve realism within the field of three-dimensional avatar control. This was accomplished by replacing a traditional hand-based peripheral controller with a Neural Impulse Actuator (NIA) headset, a device which reads and reports a user's brainwaves in real time. The avatar's virtual environment was designed to make use of the headset's output features. A series of headset-based trigger events were implemented, each allowing the user to alter the environment when a set of preconditions was met; these requirements were most often satisfied via control of the NIA device. The project's success was measured by how well a new user was able to interact within the environment, with regard to adapting to the system, influencing the virtual world, and performing faster with neural-based commands than with keyboard-based commands.

Chair: Joe Geigel
Reader: Reynold Bailey
Observer: Hans-Peter Bischof

Masters Project Announcement: Co-operative Caching using Hints on Distributed File Systems by Zalak Maniar

Title: Co-operative Caching using Hints on Distributed File Systems
Candidate: Zalak Maniar
E-mail: zdm5837@rit.edu
Defence Date: September 17, 2010
Time: 1:00 pm
Location: TBA
URL: http://www.cs.rit.edu/~zdm5837/

Abstract:
With the advent of Distributed File Systems, sharing data among the various nodes in a network became possible, but there is a huge overhead associated with retrieving data from remote storage disks. Retrieving data from remote memory is faster than retrieving it from a remote disk, and this observation led to the concept of co-operative caching, in which the cache contents of all machines connected to the network are combined and coordinated to form a global cache structure.

Co-operative caching pools the large memories of all clients, maintaining high hit rates in the clients' local caches and avoiding the network latency of forwarding every miss to the server. It introduces one more level into the cache memory hierarchy: remote client memory. There are various co-operative caching techniques, such as direct client cooperation, greedy forwarding, centrally coordinated caching, N-Chance forwarding, hint-based algorithms, and many others. I plan to implement a decentralized algorithm, co-operative caching using hints rather than exact knowledge of the system's state, and then provide a comparative study of traditional N-Chance forwarding, N-Chance forwarding with predictive pre-fetching, and the hint-based algorithm.

Chair: Dr. Hans-Peter Bischof
Reader: Dr. Minseok Kwon
Observer: TBA

Masters Project Announcement: RealDB: Low-Overhead Database for Time-Sequenced Data Streams in Embedded Systems by Jason Winnebeck

Title: RealDB: Low-Overhead Database for Time-Sequenced Data Streams in Embedded Systems
Candidate: Jason Winnebeck
E-mail: jpw9607@cs.rit.edu
Defence Date: October 5, 2010
Time: 2:00 pm
Location: ICL1 (70-3520)
URL: http://www.gillius.org/realdb/

Abstract:
Embedded sensor monitoring systems deal with large amounts of live time-sequenced stream data. An embedded system requires a low-overhead data store that can work with limited resources and run reliably and unattended, even in the face of power faults.

Relational database management systems (RDBMS) are a well-understood and powerful solution capable of storing time-sequenced data; however, many have high overhead, are not sufficiently reliable and maintenance-free, or are unable to maintain a hard size limit without adding substantial complexity.

RealDB is a specialized solution that capitalizes on the unique attributes of the data stream storage problem to maintain maximum reliability in an unstable environment while significantly reducing the overhead from indexing, space allocation, and inter-process communication compared to a traditional RDBMS-based solution.

Chair: Henry A. Etlinger
Reader: Alan Kaminsky
Observer: T.J. Borrelli
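RealDB's internals are not described here, but the "hard size limit" property can be sketched as a circular store that reuses a fixed number of slots, overwriting the oldest samples first. The names and record layout below are assumptions, not RealDB's actual format:

    from collections import deque

    class RingStore:
        """Fixed-budget store for time-sequenced records."""
        def __init__(self, max_records: int):
            # deque with maxlen drops the oldest entry on overflow.
            self._slots = deque(maxlen=max_records)

        def append(self, timestamp: float, value: float) -> None:
            self._slots.append((timestamp, value))

        def scan(self, t_start: float, t_end: float):
            """Yield records in a time window; data is already time-ordered."""
            for ts, value in self._slots:
                if t_start <= ts <= t_end:
                    yield ts, value

    store = RingStore(max_records=3)
    for t in range(5):
        store.append(float(t), t * 1.5)
    print(list(store.scan(2.0, 4.0)))  # only the 3 newest records survive

Because the append order is the time order, range scans need no separate index, which hints at how such a design avoids much of an RDBMS's indexing and allocation overhead.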

Masters Project Announcement: Parallel Ray Tracing: Analysis of GPU Platforms by Anoop Ravi Thomas

Title: Parallel Ray Tracing: Analysis of GPU Platforms
Candidate: Anoop Ravi Thomas
E-mail: art1759@cs.rit.edu
Defence Date: Oct 14th, 2010
Time: 1:00 pm
Location: Room GOL-3405: Graphics Lab
URL: http://www.cs.rit.edu/~art1759/projects/masters/

Abstract:
With the advent of multi-core processor technology, more and more applications are taking advantage of the multiple processors at their disposal, and the end result is that they run much faster. More complex applications are also arising thanks to this technology. Real-time ray tracing is one such application and has been an ultimate goal of computer graphics for many years; with today's processors, it is very achievable. The GPU has evolved into a stream processor that can be used to run parallel applications. The issue is that many standards and programming languages for GPUs are emerging, so it becomes confusing for developers to decide which "compute" platform to use for developing a parallel application. This project aims to distinguish the differences between the compute platforms and to analyze them against a set of metrics. I have implemented a ray-tracing engine that supports both the OpenGL and DirectX display platforms and uses one of the following compute platforms for generating the ray-traced scene:

1. OpenCL
2. DirectCompute
3. CUDA
4. HLSL Pixel Shader

I have also implemented a solution for programmatically defining scenes using a set of primitives and performing animation. Using a set of scenes, I have analyzed and compared these platforms against each other, and I will present this analysis in the paper.

Chair: Prof. Reynold Bailey
Reader: Prof. Joe Geigel
Observer: Prof. Warren R. Carithers

Masters Project Announcement: A framework to test schema matching algorithms by Bhavik Doshi

Title: A framework to test schema matching algorithms
Candidate: Bhavik Doshi
E-mail: bkd4833@rit.edu
Defence Date: Friday, October 22, 2010
Time: 12:00 pm
Location: GOL-3600 (Database & Robotics Lab)
URL: http://www.cs.rit.edu/~bkd4833

Abstract:
Schema matching plays an important role in the architecture of data integration and is the process of identifying semantically related objects. It can be described as a process in which source schema elements are mapped to the target schema elements. It plays a critical role in enterprise information integration and has been a popular data management research topic, particularly in building data warehouses and marts. Due to the subjective nature of schema matching, the process is complex to automate, though efforts have been made to make it semi-automatic. In addition, traditional techniques take advantage of only one of the aspects of syntax, semantics, or data and its probability distribution. By exploiting individual features in isolation, it becomes difficult to increase the success rate, as each approach is implemented independently of the others.

The latest development in this field is the use of a holistic approach to schema matching which is domain-independent and works on the principle of integrating different match processes. It has become essential to test the feasibility of these approaches on real-world schemas and examine their behavior. Initial results show that as the data similarity and number of instances vary, the results of each of these methods vary. To address these issues, this project proposes and develops a framework to test the viability of traditional and holistic approaches in real-world scenarios. It assesses system and matching efficiency based on evaluation metrics and other schema parameters. Furthermore, this project uses design of experiments to test the methods and draw statistical conclusions.

Chair: Rajendra K. Raj
Reader: Carol J. Romanowski
Observer: Trudy M. Howles
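A minimal syntactic matcher conveys the basic step on which both traditional and holistic approaches build. This sketch is hypothetical, not part of Doshi's framework; it pairs columns by name similarity with a cut-off below which no mapping is proposed:

    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def match_schemas(source, target, threshold=0.5):
        """Map each source column to its best-scoring target column."""
        mapping = {}
        for s in source:
            best = max(target, key=lambda t: similarity(s, t))
            score = similarity(s, best)
            if score >= threshold:       # below threshold: no match proposed
                mapping[s] = (best, round(score, 2))
        return mapping

    source = ["cust_name", "cust_addr", "phone_no"]
    target = ["CustomerName", "CustomerAddress", "Telephone", "Fax"]
    print(match_schemas(source, target))

Real matchers combine several such signals (names, types, instance statistics), which is precisely the integration the holistic approach formalizes.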

Masters Project Announcement: Cyberaide Creative: On-Demand Deployment of Cyberinfrastructure by Casey Rathbone

Title: Cyberaide Creative: On-Demand Deployment of Cyberinfrastructure
Candidate: Casey Rathbone
E-mail: casey.rathbone@gmail.com
Defence Date: 10/21/2010
Time: 9:00 am
Location: 70-3475
URL: http://people.rit.edu/~ctr7867/

Abstract:
As demand for cloud and grid computing solutions increases, the need for user-oriented software that provides access to these resources also increases. Until recently, the use of such computing resources was limited to those with exceptional knowledge of system design and configuration. With the advent of grid middleware projects this began to change, allowing new users unfamiliar with complex grid infrastructure and client software to leverage complex computing systems for their own research. The Cyberaide Gridshell demonstrated this by providing a user-oriented interface for submitting jobs to a grid. Following the same paradigm, my objective is to create a tool that takes another step forward by abstracting the creation and configuration of virtual infrastructure and system software away from the end user. This will be achieved through the use of cloud resources provided by VMware virtualization and deployment via a web interface. The tool will demonstrate the ease and versatility of deploying cyberinfrastructure, such as clusters and grids, on demand within a cloud environment.

Chair: Professor Hans-Peter Bischof
Reader: Dr. Gregor von Laszewski
Observer: TBA

Masters Project Announcement: Face Identification Using Edge Detection and Skin Texture Modeling by Rachel Elizabeth Manoni

Title: Face Identification Using Edge Detection and Skin Texture Modeling
Candidate: Rachel Elizabeth Manoni
E-mail: rachel.manoni@gmail.com
Defence Date: 10/29/10
Time: 10:00 am
Location: 70-3576
URL: https://sites.google.com/site/rachelmanoni/

Abstract:
Recent face identification algorithms attempt to automatically identify specific people in digital images. This has merit in security systems and in personal use for identifying friends and family in digital photography. Current systems are not robust enough to accurately identify the same individual across images with changes in facial pose, facial expression, occlusion, hair, illumination, aging, and so on. A new approach is needed to tackle these varying environmental conditions. This project examines how three different approaches classify 44 individuals using a limited training set under three lighting conditions, nine poses, and four facial expressions.

Edge detection uses the underlying facial features and structures to classify an individual; experiments showed this model to be invariant to pose. The skin texture model uses the uniqueness of each person's skin texture to identify an individual and is not affected by lighting conditions. By combining the two models into one, the result is a facial identification system that is more robust to pose and illumination conditions, as well as to some facial expressions.

Chair: Roxanne L. Canosa
Reader: Reynold Bailey
Observer: Joe Geigel

Masters Project Announcement: Population Based Sound for Particle Systems Through Granular Synthesis by Christopher J. Murdock

Title: Population Based Sound for Particle Systems Through Granular Synthesis
Candidate: Christopher J. Murdock
E-mail: murdock.cj@gmail.com
Defence Date: 11/04/2010
Time: 2:00 pm
Location: 70-3405 (Graphics Lab)
URL: http://www.cs.rit.edu/~cjm8034/mastersSite.html

Abstract:
Particle systems are often used as effective tools in computer-generated scenes for rendering visual phenomena with fuzzy boundaries. In many cases, simulations created with particle systems lack a soundtrack to accompany them. Even when a soundtrack is present, the sound is commonly a recording of a similar phenomenon occurring in the real world and inherently has little to do with what the audience is seeing in the computer-generated scene. Granular synthesis is a technique which similarly seeks to create emergent and overarching effects based on a population, but does so in the realm of sound. The following report details the implementation of a system which applies granular synthesis to create believable population-based sound to accompany a visual particle system. The system both creates custom granulation parameters based on information about a target particle system and utilizes a genetic algorithm to create random granulation parameters, using an end user to judge the fitness of each parameter set. The parameters created by these mechanisms, along with an input audio file, are passed to an existing granular synthesis program which outputs "granulated" versions of the input audio file for an end user to evaluate and evolve as they see fit. The custom and evolved parameter sets are compared and can even be combined by the user in order to create an appropriate soundtrack to match what the audience sees. This process serves to create population-based sound with a more direct correlation to what is being visually observed in the particle system video, and could be a very useful tool for computer graphics developers who wish to add sound to their existing particle systems.

Chair: Joe Geigel
Reader: Warren Carithers
Observer:
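The granulation step itself can be sketched in a few lines: grains are short, windowed slices of a source signal, overlap-added at random positions. The parameters below stand in for the kind of parameter set a genetic algorithm could evolve; none of this is the project's actual granulation program:

    import numpy as np

    def granulate(source, n_grains=200, grain_len=1024, out_len=48000,
                  rng=None):
        rng = rng or np.random.default_rng(0)
        env = np.hanning(grain_len)          # smooth each grain's edges
        out = np.zeros(out_len)
        for _ in range(n_grains):
            src = rng.integers(0, len(source) - grain_len)
            dst = rng.integers(0, out_len - grain_len)
            out[dst:dst + grain_len] += source[src:src + grain_len] * env
        peak = np.max(np.abs(out))
        return out / peak if peak > 0 else out

    tone = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
    print(granulate(tone).shape)  # one second of "granulated" audio

Tying n_grains or grain density to the particle population is one plausible way such parameters could track what the audience sees.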

Masters Project Announcement: Comparing NegaScout and MTD(f), and introducing NegaAAC* for Chess by Nicholas Ver Hoeve

Title: Comparing NegaScout and MTD(f), and introducing NegaAAC* for Chess
Candidate: Nicholas Ver Hoeve
E-mail: nav5463@rit.edu
Defence Date: 11/12/2010
Time: 10:00 am
Location: ICL5
URL: https://docs.google.com/leaf?id=0B4y0I9PPksCRZWE2MjBkMDItODVlNS00ZGUzLTg3YmQtZDhkNWNkODYyNDJh&hl=en&authkey=CM2-v80K

Abstract:
Computer chess engineers have sought to increase the performance of their engines by searching fewer nodes. The quality of the heuristics used eventually outgrew the Alpha-Beta search algorithm, and its two major replacements, MTD(f) and NegaScout, both take greater advantage of accurate heuristics. Although MTD(f) is considered slightly superior, NegaScout is often regarded as more practical in general. The performance of both algorithms is compared, with each algorithm's specific performance needs accommodated. Additionally, I present a new variant of the MTD framework which offers an improvement in stability over MTD(f).

Chair: Zack Butler
Reader: Leon Reznik
Observer: TBA
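MTD(f)'s driver is compact enough to show in its textbook form (after Plaat): it converges on the minimax value through a sequence of zero-window alpha-beta probes. The toy tree of nested lists and the plain alpha-beta below are illustrative; a real engine adds a transposition table so the repeated probes share work:

    import math

    def alphabeta(node, alpha, beta, maximizing=True):
        if not isinstance(node, list):       # leaf: static evaluation
            return node
        value = -math.inf if maximizing else math.inf
        for child in node:
            score = alphabeta(child, alpha, beta, not maximizing)
            if maximizing:
                value = max(value, score); alpha = max(alpha, value)
            else:
                value = min(value, score); beta = min(beta, value)
            if alpha >= beta:
                break                        # cutoff
        return value

    def mtdf(root, first_guess=0):
        g, upper, lower = first_guess, math.inf, -math.inf
        while lower < upper:
            beta = g + 1 if g == lower else g    # zero-window probe at g
            g = alphabeta(root, beta - 1, beta)
            if g < beta:
                upper = g                    # probe failed low
            else:
                lower = g                    # probe failed high
        return g

    tree = [[3, 12], [2, 4], [14, 5]]
    print(mtdf(tree), alphabeta(tree, -math.inf, math.inf))  # both print 5

The driver's dependence on a good first guess and its many re-searches are exactly why its stability, the property the proposed NegaAAC* variant targets, matters in practice.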

Masters Project Announcement: Comparing Global And Local Recoding Anonymization by Mayank Goel

Title: Comparing Global And Local Recoding Anonymization
Candidate: Mayank Goel
E-mail: mxg9811@rit.edu
Defence Date: 11/16/2010
Time: 10:00 am
Location: GOL-3600 (CS Database Lab)
URL: http://www.cs.rit.edu/~mxg9811

Abstract:
Recently, the privacy of released consumer micro-data has become a serious cause for concern among users. Micro-data refers to data shared in its raw, non-aggregated form and is often used for data analysis. Anonymization is the process of removing or modifying identifying variables from the released micro-data, making it harder to identify a respondent uniquely. Disclosure, or re-identification, occurs when an entity learns previously unknown information from the released micro-data. It is well known that during data analysis not all attributes have the same utility. A significant amount of research has been done on efficient methods of anonymization that can help protect user privacy while reducing information loss as much as possible. However, none of these methods considers the utility of attributes in the released micro-data.

This project compared three algorithms based on the generalization method used. A global recoding anonymization method was compared against utility-based local recoding anonymization methods. Information loss and data utility were measured using performance metrics. A framework to specify the utility of numerical and categorical data attributes was developed. This project provides background information on data anonymization, a hypothesis on data anonymization, and a discussion of recent developments in this field. It also provides an application development and testing plan to verify the stated hypothesis.

Chair: Rajendra K. Raj
Reader: Carol J. Romanowski
Observer:

Masters Project Announcement: Application of the Dendritic Cell Algorithm to Multiple Attack Detection by Jerry Saravia

Title: Application of the Dendritic Cell Algorithm to Multiple Attack Detection
Candidate: Jerry Saravia
E-mail: jxs1533@rit.edu
Defence Date: 01/13/2011
Time: 10:00 am
Location: GCCIS
URL: http://www.cs.rit.edu/~jxs1533/files/writeup.pdf

Abstract:
The aim of this project is to understand the performance of the Dendritic Cell Algorithm (DCA) in the presence of more than one attack type, and to introduce an extended definition of the algorithm that allows it to identify more than one attack type while maintaining the DCA's performance. In previous iterations of the DCA, the dendritic cells are exposed to a single context. The extended definition proposed here exposes the dendritic cells to more than one context at a time. This exposure to multiple contexts allows the DCA to detect multiple attack types.

Chair: Roger Gaborski
Reader: Paul Tymann
Observer: Yuheng Wang

Masters Project Announcement: A Schematic Approach for Distributed Search Engine in Structured Peer-to-Peer Networks for Full-text Searching by Niranjan Kabbur

Title: A Schematic Approach for Distributed Search Engine in Structured Peer-to-Peer Networks for Full-text Searching
Candidate: Niranjan Kabbur
E-mail: nxk4958@rit.edu
Defence Date: 18th January 2011
Time: 12:00 pm
Location: GOL-3600 (CS Database Lab)
URL: https://sites.google.com/site/nkabburproject/

Abstract:
Peer-to-peer (P2P) applications have been very popular ever since they came to public attention, largely because of their extensive use for file sharing. The architecture of peer-to-peer systems has evolved steadily, overcoming the problems of scalability, performance, and, most importantly, the single point of failure prominently witnessed in centralized servers. The earliest peer-to-peer networks were largely unstructured groups of peers which used a flooding technique for peer and data look-up within the network. The latest generation of peer-to-peer systems has a structured topology which eliminates the need to flood the network for look-ups. Such structured peer-to-peer systems achieve efficient look-up using various approaches, such as search over a global peer index, hybrid local-global peer indexes, and DHT-based search, which provides exact keyword search.

Although structured P2P systems offer a viable infrastructure to index, manage, and search content in a large-scale distributed system, they are most effective for exact keyword searches. In reality, a user searching for a file tends to produce spelling variations of the keywords.

This project aims to develop an approach that achieves efficient non-exact keyword search (search for keywords with typos or spelling variations) in structured peer-to-peer systems and ranks the search results based on document relevancy.

Chair: Dr. Minseok Kwon
Reader: Dr. Manjeet Rege
Observer: Dr. Rajendra K. Raj
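The fuzzy-lookup ingredient of such a search is typically an edit-distance bound. The standard dynamic-programming formulation is shown below on a toy index; this is a sketch of that one ingredient, not the project's full P2P scheme:

    def edit_distance(a: str, b: str) -> int:
        """Levenshtein distance via the classic rolling-row DP."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,              # deletion
                               cur[j - 1] + 1,           # insertion
                               prev[j - 1] + (ca != cb)))  # substitution
            prev = cur
        return prev[-1]

    index = ["distributed", "search", "engine", "network"]
    query = "serach"   # transposition typo
    hits = [(w, edit_distance(query, w)) for w in index]
    print(min(hits, key=lambda h: h[1]))  # ('search', 2)

The distributed-systems challenge is routing such a query in a DHT, which by construction only supports exact-key lookups; that routing problem is what the project addresses.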

Masters Project Announcement: Self-Localization Using Objects as Landmarks by Lisa M. Tiberio

Title: Self-Localization Using Objects as Landmarks
Candidate: Lisa M. Tiberio
E-mail: lmt4636@cs.rit.edu
Defence Date: 02/04/11
Time: 2:00 pm
Location: 70-3688
URL: http://www.cs.rit.edu/~lmt4636

Abstract:
Self-localization in an indoor setting is a current topic of research. Indoor self-localization is the process of knowing your location relative to your surroundings. Current approaches can be broken down into three types: vision-based, non-vision-based, and hybrid approaches. The goal of this project is to determine self-localization through computer vision recognition of natural landmarks in indoor environments. GPS is an example of a non-vision-based approach; however, it does not work indoors [Turgut and Martin (2009)], so other methods must be used to determine location in an indoor setting.

This project involves eye-tracking individuals as they walk along various corridors in a single building on campus. A search of the current literature shows this to be a novel approach to indoor self-localization. The goal is to determine the location of the individual (the floor) from data collected during two phases. Phase one, the training phase, consists of collecting video of the scene as an individual walks along corridors on all floors of a building. Processing the training data consists of superimposing a grid over selected frames of the scene video and extracting features from patches surrounding the grid-line intersections. These features are used for classification purposes and placed in an array which represents a pseudo-map of the floor; the feature vectors serve as natural landmarks for the recognition process. Phase two, the testing phase, consists of monitoring and recording the fixations of subjects walking along one or more of the corridors. Features are extracted from a small region surrounding each fixation point. The fixation feature vectors are compared to the feature vectors from each of the training sets, using a distance metric to compare the fixated features to the stored features determined during the training phase. The location of the individual is determined by classifying each test instance against the training data. An evaluation of the optimal features to extract is carried out, and variation in orientation and lighting conditions is considered when selecting features.

Chair: Dr. Roxanne L. Canosa
Reader: Dr. Manjeet Rege
Observer: Dr. Joseph Geigel

Masters Project Announcement: Extraction of Fire Line Variables from Multispectral Infrared Images by Abhijit Pillai

Title: Extraction of Fire Line Variables from Multispectral Infrared Images
Candidate: Abhijit Pillai
E-mail: ahp1252@rit.edu
Defence Date: Feb 25, 2011
Time: 10:00 am
Location: CS Breakout #3, Room No - 3576
URL: http://sites.google.com/site/abhijitpillai/

Abstract:
Planned and unplanned wildland fires need to be assessed in real time and managed under challenging conditions, given the potentially destructive effects of such fire events. Examples of these effects include wildfire impacts on infrastructure and homes, climatic conditions, standing biomass in vegetated environments, and the reduction or change of local species habitat. It becomes necessary to formalize mitigation strategies that do not rely on field efforts, given the dangerous field environment created by wildland fires. Remote sensing is a key technological tool for monitoring the progress of wildland fires, with sensors on platforms ranging from satellites in space, to high- and low-altitude aircraft, to tower or ground level. Airborne sensors in particular are designed to capture data and images over large areas and at high spatial resolution. Data obtained from these sensors can help decision makers design an effective response plan to mitigate disaster events. However, transforming images and data into meaningful information poses a research challenge that requires our continued attention.

In this project, I have implemented and integrated an algorithm for the extraction of fire-line variables into the RIT WASP (Wildfire Airborne Sensor Program) sensor workflow. This includes (i) coding the actual algorithms in a format that dovetails with the current WASP software architecture, (ii) inserting the algorithms at the proper workflow stage, and (iii) evaluating the algorithms' function in a simulated data collection, processing, and product production environment. The overall aim of my project is to enable the generation of real-time fire line and propagation products that are essential to fire managers and disaster decision makers. The proposed strategy, if implemented properly, would thus enable high-level decision making by providing disaster management products in an accurate, timely, and reliable fashion.

Chair: Dr. Hans-Peter Bischof
Reader: Dr. Anthony Vodacek
Observer: Dr. Jan Van Aardt

Masters Project Announcement: Foreign Exchange Rate Prediction Using Genetic Algorithms and Neuro-Fuzzy Systems by Samuel Jacques Lallemand Jr.

Title: Foreign Exchange Rate Prediction Using Genetic Algorithms and Neuro-Fuzzy Systems
Candidate: Samuel Jacques Lallemand Jr.
E-mail: sjl9945@cs.rit.edu
Defence Date: February 24th 2011
Time: 11:00 am
Location: 70-3672
URL: http://lnpcreations.com/sjl9945/

Abstract:
One of the major shortcomings of international economics and macroeconomics has been the inability of theoretical models of exchange rates to fit empirical data at both short- and long-term horizons. From an empirical perspective, we present an ex-post analysis of CAD/USD spot exchange rate forecasting, using a relatively new variable from the field of microstructure (order flow) along with conventional macro data. The approach in this project extends the traditional domain of variables used in exchange rate forecasting by combining transactional order flows generated by major foreign exchange actors (commercial clients, foreign institutions, and interbank transactions), collected over a period of ten years by the Bank of Canada, with conventional variables generally considered in foreign exchange modeling (e.g., crude oil prices, interest rates). Using both genetic methods and neuro-fuzzy systems, we provide empirical evidence of the importance of order flow in exchange rate determination, thus adding another case study to the literature in favor of continued research in that area.

Chair: Dr. Leon Reznik
Reader: Dr. Joe Geigel
Observer: Dr. Hans-Peter Bischof

Masters Project Announcement: Recreating finger motion from audio data for live performance in virtual space by Pranabesh Sinha

Title: Recreating finger motion from audio data for live performance in virtual space
Candidate: Pranabesh Sinha
E-mail: pranabesh.sinha@gmail.com
Defence Date: April 5th 2011
Time: 11:00 am
Location: 70-3600
URL: http://midirealtime.blogspot.com/

Abstract:
Although motion capture allows us to animate human motion, the data first needs to be processed before it can be applied to models. Hence, if this data is used directly in real time, the resulting animation will have artifacts. If there are multiple joints in a small area, such as in the fingers, the amount of noise in the data is even higher.

The purpose of this project is to create an alternative technique by which finger movement while playing a musical instrument such as the piano can be animated in real time by analyzing the music that is being played.

Chair: Dr. Joe Geigel
Reader: Dr. Reynold Bailey
Observer: Dr. Warren R. Carithers

Masters Project Announcement: Implementing a Data Quality Module in an ETL Process by Adarsh Atluri

Title: Implementing a Data Quality Module in an ETL Process
Candidate: Adarsh Atluri
E-mail: axa8298@cs.rit.edu
Defence Date: April 26, 2011
Time: 9:00 am
Location: GOL-3405 (RND Lab)
URL: http://people.rit.edu/axa8837/

Abstract:
Over the past few decades, there has been an explosion of data with the arrival of the Internet and network-based information systems. The exponential increase in data has caused information to be stored in a wide variety of storage systems located all over the world. This dispersion of data, accompanied by increasing reliance and dependency on it, has made data quality issues more complex. Therefore, the ability to maintain data quality and handle bad data is key to ensuring the accuracy and completeness of the data in a storage system. Nowadays, most data warehouse projects integrate the data quality phase into the data warehouse load process (the ETL process) without allocating enough time for efficient data quality validation. Bad-quality data can have a very adverse impact on a data warehouse or mart. Data issues caused by constraint violations, data type mismatches, data duplication, and data incompleteness may cause the ETL process to fail, producing unsuccessful loads to the data warehouse. This greatly impacts the accuracy and completeness of the data stored in the data warehouse.

This project implements a data quality module in which a set of data quality rules is defined for each column in a database. When executed, the module generates stored procedures based on these rules. The stored procedures are then compiled and executed on the respective database. The data quality module handles bad data according to the configuration of the data quality rules. The module has been integrated into an ETL process, and the data quality rules have been defined on the data being loaded into the staging area of a data warehouse/mart. This paper shows that such a data quality module reduces the number of defects in a database, thereby improving data quality. It also shows that the performance of the ETL process can be improved, as the data quality module allows the ETL process to handle the data in bulk.

Chair: Rajendra K. Raj
Reader: Carol J. Romanowski
Observer: Trudy M. Howles
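The rule-to-procedure idea might look roughly like the following, where each declarative column rule becomes a validation statement that routes offending rows to a reject table before the warehouse load. The rule format and generated SQL are assumptions for illustration, not the module's actual output:

    RULES = [
        {"table": "stg_orders", "column": "order_id",  "check": "NOT NULL"},
        {"table": "stg_orders", "column": "quantity",  "check": ">= 0"},
        {"table": "stg_orders", "column": "cust_code", "check": "LIKE 'C%'"},
    ]

    def render_check(rule):
        """Emit SQL that moves rows violating one column rule."""
        col, cond = rule["column"], rule["check"]
        # NULLs pass value checks; a separate NOT NULL rule handles them.
        pred = f"{col} IS NULL" if cond == "NOT NULL" else \
               f"NOT ({col} {cond} OR {col} IS NULL)"
        return (f"INSERT INTO {rule['table']}_rejects\n"
                f"SELECT * FROM {rule['table']} WHERE {pred};")

    for rule in RULES:
        print(render_check(rule), end="\n\n")

Generating set-based statements like these, rather than validating row by row, is one way such a module lets the ETL process handle data in bulk.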

Masters Project Announcement: Real time surgical simulation using deformable meshes by Madayi Kolangarakath Rohit

Title: Real time surgical simulation using deformable meshes
Candidate: Madayi Kolangarakath Rohit
E-mail: rsm1792@cs.rit.edu
Defence Date: May-13-2011
Time: 10:00 am
Location: Graphics Lab
URL: http://www.cs.rit.edu/~rsm1792/projects/surgical/surgical_index.htm

Abstract:
A large number of surgeries nowadays are performed using surgical robots, which allow high-precision incisions with minimal error. Traditionally, animals and cadavers have been used for training purposes. However, these are expensive resources and are not available in all facilities. Furthermore, animal testing has met with opposition from animal rights groups, and so many have turned toward virtual surgical simulations. Virtual simulations allow surgeons to gain experience using robotic surgery equipment at minimal cost and with almost instantaneous feedback, allowing for a realistic experience. Simulations are not without their drawbacks, though.

One of the biggest challenges is achieving real-time interaction and feedback. Modeling the physics involved in a surgical procedure is difficult enough; reproducing the results in real time adds considerably to the computational overhead. Statistics have shown that patients are advised to seek a surgeon who has experience with at least 250 surgeries, so for training purposes it is imperative that the simulator provide as realistic an experience as possible.

In this project, we intend to develop a surgical simulator that will serve as a training tool, allowing surgeons to practice on virtual models and thus minimizing cost while providing a holistic and beneficial experience. Specifically, this will be a prostatectomy (prostate surgery) simulator. Modelers will develop the models of the organs and the surgical instruments, and our job will be to develop the software framework for the surgical simulator. The models will be represented as deformable meshes, and soft body physics will be implemented for the surgical simulation.

Chair: Prof. Reynold Bailey
Reader: Prof. Warren Carithers
Observer: Prof. Joseph Geigel

Masters Project Announcement: Subgraph Isomorphism: Special Subgraphs and Cliques by Christopher Tang

Title: Subgraph Isomorphism: Special Subgraphs and Cliques
Candidate: Christopher Tang
E-mail: chrisftang@gmail.com
Defence Date: Friday, May 13th 2011
Time: 9:00 am
Location: Breakout Room 3, Bldg 70, Room 3576
URL: http://www.cs.rit.edu/~cft9085/

Abstract:
Subgraph isomorphism is a well-known NP-complete problem. It asks, given two graphs G and H, whether G is isomorphic to some subgraph of H. Any solution to this problem suffers from a worst-case complexity that is exponential in the size of the input. However, there are many special cases for which efficient algorithms exist. This paper provides a review of subgraph isomorphism and of special cases depending on the pattern graph G. Various theoretical results have been gathered, in addition to an evaluation of the real-world performance of several algorithms, with a special emphasis on the performance of maximum clique algorithms.

In the majority of published algorithms, new techniques are evaluated by comparing their performance against numbers from previously published benchmarks of other algorithms. This is often done by making educated guesses at a scaling factor between hardware platforms rather than using any experimentally derived value. Such comparisons can only give a rough idea of the algorithms' relative performance, and the fact that certain methods do better on some classes of problems than on others makes it even more difficult to glean useful information from those publications.

For each problem (subgraph isomorphism and maximum clique), several notable algorithms were benchmarked on the same platform using a common set of problem instances. In doing so, it is possible to obtain a clearer understanding of their performance characteristics.

Chair: Dr. Edith Hemaspaandra
Reader: Dr. Chris Homan
Observer: Dr. Ivona Bezáková
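For reference, a basic branch-and-bound maximum-clique search, far simpler than the solvers such a study benchmarks, fits in a dozen lines: grow a clique one vertex at a time, keeping only candidates adjacent to everything chosen so far, and prune branches that cannot beat the best clique found.

    def max_clique(adj):
        """adj: dict mapping each vertex to its set of neighbors."""
        best = []
        def expand(clique, candidates):
            nonlocal best
            if len(clique) + len(candidates) <= len(best):
                return                      # bound: cannot beat best
            if not candidates:
                best = clique[:]            # new best clique found
                return
            for v in list(candidates):
                candidates.remove(v)        # later branches exclude v
                expand(clique + [v], candidates & adj[v])
        expand([], set(adj))
        return best

    # A 5-vertex graph whose largest clique is {0, 1, 2}.
    adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1, 4}, 4: {3}}
    print(sorted(max_clique(adj)))  # [0, 1, 2]

The competitive algorithms differ mainly in vertex ordering and in tighter bounds (e.g., coloring-based), which is where their benchmark performance diverges.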

Masters Project Announcement: Distributed Data Mining in Peer-to-Peer Networks by Uthra Natarajan

Title: Distributed Data Mining in Peer-to-Peer Networks
Candidate: Uthra Natarajan
E-mail: uxn4932@cs.rit.edu
Defence Date: 05/10/2011
Time: 1:00 pm
Location: GCCIS CS 70-3672 Break Out Room 4
URL: http://www.cs.rit.edu/~uxn4932

Abstract:
Distributed Data Mining (DDM) is an emerging facet of data mining, a new area of research growth, and is gaining popularity in advanced data-driven domains. DDM can be used to manage large amounts of data efficiently in distributed environments.

There are various sequential mining algorithms that discover association rules in a data set; extending them to distributed data involves communication overhead. To overcome this overhead, an efficient association rule mining algorithm is needed that operates on a decentralized database. Privacy is another important concern in designing such an algorithm, as large amounts of data are transferred between the participating sites. Distributed Mining of Association rules (DMA) is an algorithm for mining association rules quickly. The goal of this project is to implement the DMA algorithm to find association rules over horizontally distributed databases, improve the performance of the DMA algorithm, perform a comparative study of centralized and distributed mining results, and investigate privacy-preserving approaches that mask sensitive information at each site.

Chair: Dr. Carol Romanowski
Reader: Dr. Ivona Bezáková
Observer: Dr. Roger S. Gaborski

Masters Project Announcement: Managed Content Publish-Subscribe System Using Generic Delivery Mechanisms by Jason M. Christopher

Title: Managed Content Publish-Subscribe System Using Generic Delivery Mechanisms
Candidate: Jason M. Christopher
E-mail: controlsengineer@gmail.com
Defence Date: 2011-05-12
Time: 9:00 am
Location: 70-3405
URL: http://people.rit.edu/~jmc4861

Abstract:
This MS project takes a new look at the combination of different data-related technologies in an effort to increase the efficiencies associated with active database systems and publish/subscribe middleware. The systems designed for the application compare a traditional middleware messaging service to one developed using XML documents for data exchange, comparing the throughput associated with each approach. Test results were compared over a large sample set to see whether any efficiencies could be realized.

Chair: Dr. Trudy Howles
Reader: Dr. Rajendra Raj
Observer: Dr. Carol Romanowski

Masters Project Announcement: Remote Access to Sensor Networks by Gaurav Raje

Title: Remote Access to Sensor Networks
Candidate: Gaurav Raje
E-mail: grr4505@rit.edu
Defence Date: 05/12/2011
Time: 1:00 pm
Location: 70-3576 Breakout Room
URL: http://www.cs.rit.edu/~grr4505/home.html

Abstract:
Sensor networks have always been a great tool for monitoring and predicting various parameters. However, the difficulty of setting up and controlling them has been daunting for many people. In my project, I have created an application which provides tools for setting up, monitoring, controlling, and accessing environmental sensor networks. This includes a custom handshake protocol, a plug-and-play soft-adding capability, a web interface for remote administration, and a GIS-based interface for monitoring the sensors. The project has been developed with special emphasis on object-oriented programming concepts and design patterns.

Chair: Leon Reznik
Reader: Fereydoun Kazemian
Observer: Richard Zanibbi

Masters Project Announcement: Language-Based Procedural Modeling for Randomized Scene Construction by Andy Scott

Title: Language-Based Procedural Modeling for Randomized Scene Construction
Candidate: Andy Scott
E-mail: ars9753@rit.edu
Defence Date: May 17, 2011
Time: 2:00 pm
Location: 76-3215
URL: http://andyscottmastersproject.blogspot.com/

Abstract:
The process of building test scenes for use in simulation-based settings can be a time-consuming and menial task. The focus of this project was to streamline this process by accepting scene specifications via a text-based medium. Conceptually, a user would simply write a paragraph describing the features and layout of a scene and use a software tool to turn that description into an actual 3-D rendering. However, attempting to process all possible constructs of the English language is infeasible; thus a grammar was crafted which defines a language specific to the context of positioning scene features. The software tool I wrote to implement the conversion from my language to scene renderings abstracts the notion of a scene "feature": a feature may be a single object, a sub-scene, or a grouping of either. This allows for complex feature conceptualization with the sole precondition of having access to a set of single-object geometries. With the additional ability to apply a variance scheme to any feature, the tool allows a user to express high-level placement dynamics in smaller terms. Sets of unique renderings can thus be produced from a single specification, automating the process of constructing sets of test sceneries.

Chair: Dr. Hans-Peter Bischof
Reader: Dr. Carl Salvaggio
Observer: Rolando Raqueño

Masters Project Announcement: Implementation of Cooperative Caching Algorithms Using Remote Client Memory by Darshan Kapadia

Title: Implementation of Cooperative Caching Algorithms Using Remote Client Memory
Candidate: Darshan Kapadia
E-mail: darshankapadia@mail.rit.edu
Defence Date: 5/18/2011
Time: 11:00 am
Location: GCCIS CS 70-3576
URL: https://sites.google.com/a/g.rit.edu/ms-project/home

Abstract:
As technology advances, processor speed is increasing much faster than disk access speed, so it becomes necessary to decrease the number of disk accesses made by a distributed file system to improve the overall performance of the system. Because network speed has increased tremendously in recent times, accessing data from a remote client machine's memory is faster than accessing data from disk. Thus the file caches of a number of client machines connected over a high-speed network can be combined to form a global cache; this is called cooperative caching. In cooperative caching, when data is requested and is not present in the client's local cache, the request is satisfied by another client machine's cache if possible. In a typical client-server distributed file system there are three levels in the memory hierarchy: server disk, server memory, and client memory. The cooperative cache can be seen as a fourth level of cache in the distributed file system. The project will implement the following five read-only cooperative caching algorithms using remote client memory: Direct Client Cooperative Caching, Greedy Forwarding, Centrally Coordinated Cache, N-Chance Forwarding, and N-Chance Forwarding with Centrally Coordinated Cache. The project will then compare the performance of the algorithms using different evaluation metrics, such as average read performance, number of disk accesses, and global and local hit ratios. These algorithms, though simple to implement and requiring no major architectural changes, can provide very good read performance by decreasing the number of disk accesses and hence reducing total access time.

Chair: Prof. Hans-Peter Bischof
Reader: Prof. James Heliotis
Observer: Prof. Minseok Kwon
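Of the five algorithms, N-Chance forwarding is the most distinctive. The following behavioral sketch shows the recirculation idea; it is simplified (for instance, every evicted block is treated as a singlet, the last cached copy of a block, and duplicate coordination is omitted):

    import random

    N = 2  # recirculation budget per singlet

    class Client:
        def __init__(self, name, capacity, peers):
            self.name, self.capacity, self.peers = name, capacity, peers
            self.cache = {}          # block -> remaining recirculations

        def insert(self, block, chances=None):
            if len(self.cache) >= self.capacity:
                victim = random.choice(list(self.cache))
                left = self.cache.pop(victim)
                if left > 0:         # singlet has chances left: forward it
                    peer = random.choice(self.peers)
                    print(f"{self.name} forwards {victim} to {peer.name}")
                    peer.insert(victim, left - 1)
            self.cache[block] = N if chances is None else chances

    random.seed(1)
    a, b = Client("A", 2, []), Client("B", 2, [])
    a.peers, b.peers = [b], [a]
    for blk in ["x", "y", "z", "w"]:
        a.insert(blk)
    print(a.cache, b.cache)

Because each hop decrements the remaining chances, a block circulates in the global cache at most N times before it is finally dropped, keeping eviction traffic bounded.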

Masters Project Announcement: Flight Path Editor by Douglas Roberts

Title: Flight Path Editor
Candidate: Douglas Roberts
E-mail: dsr3464@cs.rit.edu
Defence Date: 5/20/11 (new date)
Time: 10:00 am
Location: 70-3576
URL: http://www.cs.rit.edu/~dsr3464/mastersproj/index.html

Abstract:
This project focuses on the design and implementation of a flight path editor that will be used to define the flight path of a camera in simulations of the Spiegel project. The Spiegel project currently has a flight path editor, but it is in need of improvement. The goal of this project is to create a flight path editor that is more user-friendly than the current editor. The new editor has been implemented using Java and Java 3D. It has been compared against the current editor as well as other, similar commercial editors. This comparison was done by asking a number of users to perform similar tasks in each editor and then rate how easy it was to perform a given task in each editor. Each user was asked to choose a rating between 1 and 10, where 1 is not easy and 10 is very easy. These ratings were analyzed, and the users felt that the new flight path editor was easier to use than the current one.

Chair: Hans-Peter Bischof
Reader: Reynold Bailey
Observer: Joe Geigel

Masters Project Announcement: Improving hint-based cooperative caching using collective caching for Distributed File Systems by Sakshar Thakkar

Title: Improving hint-based cooperative caching using collective caching for Distributed File Systems
Candidate: Sakshar Thakkar
E-mail: sht9606@rit.edu
Defence Date: 05/23/2011
Time: 11:00 am
Location: GCCIS CS 70-3576
URL: http://cs.rit.edu/~sht9606

Abstract:
Many applications nowadays must deal with exceedingly large files, and a Distributed File System is one solution for managing such files. File access latency and network traffic are the main factors impeding the performance of any Distributed File System. Caching algorithms, and collaborative caching algorithms in particular, are very useful in improving the overall performance of a Distributed File System. Among the various collaborative caching algorithms, it is very difficult to find one that performs well in every one of the above-mentioned aspects.

The main challenge in collaborative caching is maintaining and providing the metadata about the contents of cooperating clients' caches. The common solution to this problem is a manager node that maintains the accurate data and serves client requests. Involving such a node incurs delay in accessing a block, and it does not scale well under a large number of requests. Hint-based cooperative caching provides a mechanism by which the metadata is stored among the participating clients in the form of hints, so the overhead on the manager is reduced and block access time is improved. It uses a global LRU algorithm for replacing blocks when the cache is full. The performance of hint-based cooperative caching can be enhanced with a better replacement algorithm that considers not only the age of a block but also the frequency of its usage. I/O latency can be further improved by incorporating collective caching, in which all the clients behave as different processes and work in parallel to serve any client's request. The goal of this project is to implement a modified and enhanced version of the hint-based cooperative caching algorithm with collective caching and a frequency-based replacement policy, to achieve improved average block access time and an increased cache hit ratio.

Chair: Prof. Hans-Peter Bischof
Reader: Prof. Minseok Kwon
Observer: Prof. Matthew Fluet

Masters Project Announcement: Real time Hair Rendering and Animation by Arunkumar Devadoss

Title: Real time Hair Rendering and Animation
Candidate: Arunkumar Devadoss
E-mail: axd6601@rit.edu
Defence Date: 23 May 2011
Time: 10:15 am
Location: Graphics Lab
URL: http://www.cs.rit.edu/~axd6601/

Abstract:
Computer graphics has come a long way in the past 20 years. There is a constant demand for characters in motion pictures and games to exhibit a high level of photorealism. The complexity of a character is directly proportional to its photorealism, which results in computation-intensive tasks. The human face, hair, and water simulation are among the most complex models to create and animate; their complexity is high due to the level of detail required, which involves animating a large number of points on a face surface or a large number of individual hair strands. Hair animation is one of the most complex systems to animate, second only to facial rendering and animation. A human being has, on average, 100,000 hair strands, and each strand must be rendered and animated for free-flowing, natural movement.

The main hypothesis of this project is that millions of hair strands can be rendered and animated efficiently using the power of parallel processing on the GPU, so that hair can be used in games and motion pictures without waiting many hours for the strands to render. To support the hypothesis, my project consists of two phases: modeling and animation. In the modeling phase, a generic hair-strand function creates hair strands in a given area, with a configurable parameter for the number of strands. The strands are divided into points and interpolated with the cubic Hermite spline method to give a curvilinear look. Individual hair strands shall be tested for intersection with the light source and other hair strands to produce a self-shadowing effect.

The animation phase will use the Verlet integration algorithm to update the current position of the hair strands to the new position, considering external forces such as gravity, wind, character head motion, and others. The new position is determined based on collisions of the points on a hair strand with points on other strands, with the character's head, and with other external forces or objects. This phase will provide natural movement of the hair strands.

Chair: Prof. Joe Geigel
Reader: Prof. Reynold Bailey
Observer: Prof. Warren R. Carithers
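A single position-Verlet step of the kind the animation phase describes can be written as follows (gravity only; strand constraints, collisions, and the GPU parallelization are omitted):

    import numpy as np

    GRAVITY = np.array([0.0, -9.81, 0.0])

    def verlet_step(pos, prev_pos, dt, damping=0.99):
        """Advance strand points one time step; returns (new, current)."""
        velocity = (pos - prev_pos) * damping   # velocity is implicit
        new_pos = pos + velocity + GRAVITY * dt * dt
        return new_pos, pos

    # Three points of one hair strand, initially at rest.
    pos = np.array([[0.0, 1.0, 0.0], [0.0, 0.9, 0.0], [0.0, 0.8, 0.0]])
    prev = pos.copy()
    for _ in range(3):
        pos, prev = verlet_step(pos, prev, dt=1 / 60)
    print(pos[:, 1])  # y-coordinates falling under gravity

Because velocity is implicit in the current and previous positions, the update is cheap and stable per point, which is what makes the per-strand work map well onto thousands of parallel GPU threads.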
Masters Project Announcement: Real time Hair Rendering and Animation by Arunkumar Devadoss

Title: Real time Hair Rendering and Animation
Candidate: Arunkumar Devadoss
E-mail: axd6601@rit.edu
Defence Date: 23 May 2011
Time: 10:15 am
Location: Graphics Lab
URL: http://www.cs.rit.edu/~axd6601/

Abstract:

Computer graphics has come a long way in the past 20 years. There is always a high demand for the characters produced in motion pictures and games to have a high level of photorealism, and the more photorealistic a model looks, the more complex it is and the more computation it requires. The human face, hair, and water are among the most complex models to create and animate, because the required level of detail means animating a large number of points on the face surface or a large number of individual hair strands. After facial rendering and animation, hair is one of the most complex systems to animate: a human being has on average about 100,000 hair strands, and each strand must be rendered and animated to achieve free-flowing, natural movement.

The main hypothesis of this project is that millions of hair strands can be rendered and animated efficiently using the power of parallel processing on the GPU, so that hair can be used in games and motion pictures without waiting many hours for the strands to render. To support the hypothesis, the project consists of two phases: modeling and animation. In the modeling phase, a generic hair-strand function creates hair strands in a given area, with a configurable parameter for the number of strands. Each strand is divided into points that are interpolated with the cubic Hermite spline method to give the strand a curvilinear look. Individual hair strands are tested for intersection with the light source and with other hair strands to produce a self-shadowing effect.

The animation phase uses the Verlet integration algorithm to update the current position of each hair strand to its new position, taking into account external forces such as gravity, wind, and the character's head motion. The new position is determined by collisions of the points on a strand with points on other strands, with the character's head, and with other external objects and forces. This phase provides natural movement of the hair strands.

Chair: Prof. Joe Geigel
Reader: Prof. Reynold Bailey
Observer: Prof. Warren R. Carithers
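For readers unfamiliar with the two numerical ingredients named above, here is a minimal sketch of cubic Hermite interpolation between strand control points and a position-Verlet update step. The Vec3 helper, the damping factor, and the method signatures are illustrative assumptions, not the project's GPU code.

    // Illustrative sketch of cubic Hermite interpolation and position Verlet
    // (not the project's GPU shaders).
    class Vec3 {
        double x, y, z;
        Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
        Vec3 add(Vec3 o)     { return new Vec3(x + o.x, y + o.y, z + o.z); }
        Vec3 sub(Vec3 o)     { return new Vec3(x - o.x, y - o.y, z - o.z); }
        Vec3 scale(double s) { return new Vec3(x * s, y * s, z * s); }
    }

    class StrandMath {
        // Cubic Hermite spline: interpolates p0 -> p1 with tangents m0, m1
        // as t runs from 0 to 1, giving the strand its curvilinear look.
        static Vec3 hermite(Vec3 p0, Vec3 m0, Vec3 p1, Vec3 m1, double t) {
            double t2 = t * t, t3 = t2 * t;
            return p0.scale(2 * t3 - 3 * t2 + 1)
                    .add(m0.scale(t3 - 2 * t2 + t))
                    .add(p1.scale(-2 * t3 + 3 * t2))
                    .add(m1.scale(t3 - t2));
        }

        // Position Verlet: the new position is extrapolated from the current
        // and previous positions plus the acceleration term; velocity stays
        // implicit. Returns { newPos, newPrev } so the caller keeps both for
        // the next step. The 0.99 damping is an illustrative stability choice.
        static Vec3[] verletStep(Vec3 pos, Vec3 prev, Vec3 accel, double dt) {
            Vec3 velocityTerm = pos.sub(prev).scale(0.99);
            Vec3 next = pos.add(velocityTerm).add(accel.scale(dt * dt));
            return new Vec3[] { next, pos };
        }
    }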
Masters Project Announcement: Distributed High Performance File System by Vishvajit Sonagara

Title: Distributed High Performance File System
Candidate: Vishvajit Sonagara
E-mail: vish2me@gmail.com
Defence Date: 05/27/2011
Time: 1:00 pm
Location: RIT
URL: https://sites.google.com/site/vishvajitsonagara/

Abstract:

A file system is an efficient mechanism for storing and retrieving files, and its hierarchical organization makes managing them easy. Sharing files over a network, however, is a cumbersome task: network bandwidth, file size, and host CPU processing power are some of the limiting factors, especially in applications that require continuous, fast access to files. Component failures are also very common in computer systems; human error, application bugs, operating system bugs, and failures of disks, memory, connectors, networks, and power supplies have all been observed to cause problems. I have implemented a high-performance distributed file-sharing mechanism to address these issues.

The Distributed High Performance File System (DHPFS) is an efficient file-sharing mechanism in which more than one node hosts file chunks, assigned according to a rational ranking of the nodes. The proposed mechanism uses a mixture of database systems and file systems to effectively store, retrieve, and organize files on multiple nodes, which increases the availability, accessibility, and scalability of the file system. DHPFS allows access to files shared across multiple hosts through a computer network, enabling resource sharing between distributed nodes and thus improving overall performance and throughput. Because of the resource limitations of single-server file systems, distributed server systems are widely used in high-speed web applications; Apache Hadoop and the Google File System are examples of scalable distributed file systems for large, data-intensive applications.

Chair: Hans-Peter Bischof
Reader: Minseok Kwon
Observer: Matthew Fluet

Masters Project Announcement: Distributed High Performance File System with Client Cooperative Cache by Yeshvanth Mirle Jayaprakash

Title: Distributed High Performance File System with Client Cooperative Cache
Candidate: Yeshvanth Mirle Jayaprakash
E-mail: yxm8138@rit.edu
Defence Date: 05/27/2011
Time: 11:00 am
Location: GCCIS CS 70-3576
URL: http://people.rit.edu/~yxm8138/project.html

Abstract:

Large distributed data-intensive applications need a Distributed High Performance File System (DHPFS) that can efficiently access large files stored in a distributed environment and provide better throughput to clients. Applications that access and process huge amounts of data need an infrastructure to store and share data across multiple servers, so a distributed file system (DFS) can be used to share resources and access data efficiently. In this paper, we present a DHPFS that stores all metadata and control information on a meta-server, distributes data blocks among multiple storage servers, and uses client cooperative caching to serve block requests from the different participating clients.

This project intends to provide better performance for large numbers of clients accessing blocks concurrently, and to significantly reduce the load on the storage servers as client participation increases. With client cooperative caching, each client can retrieve data blocks directly from remote memory instead of going to the storage servers or disks.

Chair: Dr. Hans-Peter Bischof
Reader: Dr. Minseok Kwon
Observer: Dr. Leon Reznik
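A minimal sketch of the read path such a design implies: try the local cache, ask the meta-server which peer caches the block, and only then fall back to a storage server. All of the interfaces below are illustrative assumptions for the sketch, not the actual DHPFS API.

    import java.util.*;

    // Illustrative read path for a cooperative-caching DFS client.
    interface MetaServer {
        PeerClient locatePeerFor(long blockId);   // null if no peer caches it
    }
    interface PeerClient    { byte[] fetchFromCache(long blockId); }  // may be null
    interface StorageServer { byte[] readBlock(long blockId); }

    class DfsClient {
        private final Map<Long, byte[]> localCache = new HashMap<>();
        private final MetaServer meta;
        private final StorageServer storage;

        DfsClient(MetaServer meta, StorageServer storage) {
            this.meta = meta; this.storage = storage;
        }

        byte[] readBlock(long blockId) {
            byte[] data = localCache.get(blockId);          // 1. local hit
            if (data != null) return data;

            PeerClient peer = meta.locatePeerFor(blockId);  // 2. remote client memory
            if (peer != null) {
                data = peer.fetchFromCache(blockId);        // peer may have evicted it
                if (data != null) { localCache.put(blockId, data); return data; }
            }

            data = storage.readBlock(blockId);              // 3. storage server/disk
            localCache.put(blockId, data);
            return data;
        }
    }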
Masters Project Announcement: Large File Caching in The Spiegel File System by Durgesh B Alapati

Title: Large File Caching in The Spiegel File System
Candidate: Durgesh B Alapati
E-mail: dbgarikipati@gmail.com
Defence Date: 06/07/2011
Time: 10:00 am
Location: GCCIS CS 70-3576 (Break Out Room 3)
URL: http://www.cs.rit.edu/~dbg7469/index.htm

Abstract:

The Spiegel project is a powerful visualization system designed to visualize 3-D data in space. The visualization process involves massive amounts of data processing and transfer, on the order of terabytes. Accessing such large files over the network from a file system is challenging given the limitations of memory, processing speed, and network throughput; in addition, the familiar problem of disk I/O can hinder the system's performance. Earlier work on such large file systems achieved high performance by distributing files across multiple servers, processing in parallel, and applying compression techniques. Parallel processing gives faster access to data, but the performance of the Spiegel file distribution system depends on the processing time for visualization on the client side. The objective of this project is to increase the performance of the system by ensuring that the visualization process has a continuous supply of data from the network with the minimum possible latency. This can be achieved by caching the data to be sent ahead of time on the servers, and by caching on the receiving end while the client is processing the previously received data. The primary goal of the project is to analyze the performance of the system at different cache sizes and processing times, and to test load scalability when running multiple clients and data servers.

Chair: Dr. Hans-Peter Bischof
Reader: Dr. Minseok Kwon
Observer: Dr. James E. Heliotis

Masters Project Announcement: Functional Programming Applied to Web Development Templates by Justin Cady

Title: Functional Programming Applied to Web Development Templates
Candidate: Justin Cady
E-mail: jtc8026@rit.edu
Defence Date: May 31, 2011
Time: 1:00 pm
Location: CS Breakout Room 4
URL: http://masters.justincady.com/

Abstract:

In most web applications, the model-view-controller (MVC) design pattern is used to separate the concerns of a system. However, the view layer, also referred to as the template layer, can often become tightly coupled to the controller. This lack of separation makes templates less flexible and reusable, and puts more responsibility in the controller's hands than is ideal. A better solution is to give the template layer more control over the data it presents and to limit the controller's influence on presentation. The jtc template engine uses elements of functional programming to empower templates by treating them as functions.

Chair: Professor Matthew Fluet
Reader: Professor James Heliotis
Observer: Professor Dan Bogaard
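A minimal sketch of that idea: a template is an ordinary function from model data to markup, so templates compose and nest like functions. The types and names below are illustrative assumptions, not jtc's actual interface.

    // Illustrative "template as function" sketch (not jtc's actual API).
    import java.util.*;
    import java.util.function.Function;
    import java.util.stream.Collectors;

    class TemplateDemo {
        record Post(String title, String body) {}

        // A template for one post: data in, markup out, no controller logic.
        static final Function<Post, String> postTemplate =
            p -> "<article><h2>" + p.title() + "</h2><p>" + p.body() + "</p></article>";

        // A higher-order template: builds a page template from an item template.
        static Function<List<Post>, String> pageTemplate(Function<Post, String> item) {
            return posts -> posts.stream().map(item)
                    .collect(Collectors.joining("\n", "<main>\n", "\n</main>"));
        }

        public static void main(String[] args) {
            List<Post> posts = List.of(new Post("Hello", "First post."),
                                       new Post("Again", "Second post."));
            System.out.println(pageTemplate(postTemplate).apply(posts));
        }
    }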
Masters Project Announcement: Stage Lighting Visualization Using Real Time-Global Illumination by Stephen Sarnelle

Title: Stage Lighting Visualization Using Real Time-Global Illumination
Candidate: Stephen Sarnelle
E-mail: sjs5688@rit.edu
Defence Date: Wednesday, June 8
Time: 2:00 pm
Location: 70-3600 (Graphics Lab)
URL: http://www.cs.rit.edu/~sjs5688/MSProject/

Abstract:

The purpose of this project was to investigate the use of global illumination in a real-time setting. The main task was to design and build a software system that accurately models time-dependent lighting effects at an acceptable real-time frame rate; the system developed in this project was made to assist in the design of stage lighting systems. To speed up the system, rendering is performed on a GPU using programmable shaders. The method of global illumination used is photon mapping, which disperses virtual light throughout a three-dimensional scene prior to rendering. Once a photon map is built, it is passed to the GPU with shaders to render the scene. While a working implementation is possible, the current state of graphics processors and GLSL hampers the feasibility of real-time frame rates.

Chair: Professor Joe Geigel
Reader: Professor Reynold Bailey
Observer: Professor Minseok Kwon

Masters Project Announcement: Genetic Algorithm for the Travelling Salesman Problem on Hadoop by Akshat Mishra

Title: Genetic Algorithm for the Travelling Salesman Problem on Hadoop
Candidate: Akshat Mishra
E-mail: axm1820@rit.edu
Defence Date: 6/16/2011
Time: 1:00 pm
Location: CS Break Out Room

Abstract:

The Travelling Salesman Problem (TSP) is one of the hardest and most fundamental problems in computer science. The problem is NP-hard, and so far no efficient algorithm has been found. The TSP has many applications in the industrial world, such as transportation, factory setup, and industrial design. Several techniques have been used in the past to reduce the running time of the TSP, among them tabu search, heuristic search, ant algorithms, linear programming, and genetic algorithms. Genetic algorithms have been successfully applied to NP-hard problems over the last 30 years; they can reduce the running times of NP-complete problems substantially, and they lend themselves to parallelization. In the recent past, plenty of work has been done in this area. MapReduce, a popular parallel programming paradigm, runs on commodity hardware; among the several MapReduce frameworks available, Hadoop is one of the most popular because of its robust, well-designed, and scalable file system.

In this project, I have designed and implemented a genetic algorithm and, further, designed a MapReduce job on Hadoop for it. The performance of the sequential genetic algorithm has been compared with a previous implementation, and experiments have been performed to measure the speedup of the MapReduce job with different numbers of nodes and instance types on Hadoop. The implementation does not need a high-end architecture and can run on commodity hardware. This project attempts a new approach to reducing the running time of the Travelling Salesman Problem by using a genetic algorithm and parallelizing it on Hadoop.

Chair: Ivona Bezáková
Reader: Joe Geigel
Observer: Minseok Kwon

Masters Project Announcement: Genetic Algorithm for the Travelling Salesman Problem on Hadoop by Akshat Mishra

Title: Genetic Algorithm for the Travelling Salesman Problem on Hadoop
Candidate: Akshat Mishra
E-mail: axm1820@rit.edu
Defence Date: 6/16/2011
Time: 1:00 pm
Location: 70-3576 (BOR 3) (updated)
URL: http://www.cs.rit.edu/~axm1820/Project_Report_Final.pdf

Chair: Ivona Bezáková
Reader: Joe Geigel
Observer: Minseok Kwon
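One common way to parallelize a GA with MapReduce is to let each map task evolve a subpopulation and have reducers merge the fittest tours; whether this project uses that exact decomposition is not stated here. Below is a minimal sketch of one core sequential operator for permutation chromosomes, order crossover (OX); the project's actual operators may differ. Cities are assumed to be labeled 0..n-1.

    // Illustrative order crossover (OX) for TSP tours: the child keeps
    // parent A's cities between two cut points and fills the remaining
    // slots with parent B's cities in B's visiting order.
    import java.util.*;

    class OrderCrossover {
        static int[] crossover(int[] a, int[] b, Random rng) {
            int n = a.length;
            int cut1 = rng.nextInt(n), cut2 = rng.nextInt(n);
            int lo = Math.min(cut1, cut2), hi = Math.max(cut1, cut2);

            int[] child = new int[n];
            boolean[] used = new boolean[n];      // cities assumed 0..n-1
            for (int i = lo; i <= hi; i++) {      // copy A's segment
                child[i] = a[i];
                used[a[i]] = true;
            }
            int pos = (hi + 1) % n;
            for (int i = 0; i < n; i++) {         // fill from B, wrapping
                int city = b[(hi + 1 + i) % n];
                if (!used[city]) {
                    child[pos] = city;
                    pos = (pos + 1) % n;
                }
            }
            return child;
        }

        public static void main(String[] args) {
            int[] a = {0, 1, 2, 3, 4, 5, 6};
            int[] b = {3, 6, 0, 5, 2, 4, 1};
            System.out.println(Arrays.toString(crossover(a, b, new Random(42))));
        }
    }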
Masters Project Announcement: Scientific Visualization Using Pixar's RenderMan by John Lukasiewicz

Title: Scientific Visualization Using Pixar's RenderMan
Candidate: John Lukasiewicz
E-mail: jxl6110@rit.edu
Defence Date: 6/29/2011
Time: 10:00 am
Location: Graphics Lab
URL: http://www.cs.rit.edu/~jxl6110/

Abstract:

This thesis will attempt to visualize astrophysical data that is preprocessed and formatted by the Spiegel software, using Pixar's RenderMan. The output will consist of a large set of points and data associated with each point. The goal is to create images that are both informative and aesthetically pleasing to the viewer. This has been done many times before with software rendering and APIs such as OpenGL or JOGL. This thesis will use Pixar's Photorealistic RenderMan (PRMan for short) as the renderer. PRMan is an industry-proven standard renderer based on the RenderMan Interface Specification, which has been in development since 1989: the original version was released in September of 1989, and the latest specification, version 3.2, was published in 2005. Since aesthetics is a subjective quality based on the viewer's preference, the only way to determine whether an image is aesthetically pleasing is to survey a general population; the thesis therefore includes an experiment to assess the quality of the new renders.

Chair: Professor Hans-Peter Bischof, Ph.D.
Reader: Professor Joe Geigel, Ph.D.
Observer: Professor Reynold Bailey, Ph.D.

Masters Project Announcement: Evolutionary algorithm for generation of the stock trading rules by Ainur Bazarbekova

Title: Evolutionary algorithm for generation of the stock trading rules
Candidate: Ainur Bazarbekova
E-mail: ainur.bazarbekova@gmail.com
Defence Date: 07-07-2011
Time: 10:00 am
Location: 70-3672
URL: http://www.cs.rit.edu/~axb6594/

Abstract:

In recent years, evolutionary algorithms have become a very popular field of research. The attractiveness of this approach lies in its ability to generate optimal solutions under uncertainty. Evolutionary algorithms are inspired by evolutionary processes in nature, namely survival of the fittest in a population, and evolutionary programming is now applied to evolving artificial systems such as artificial neural networks and fuzzy systems. In this work, we apply an evolutionary algorithm to evolve a fuzzy system for the financial domain: the generation of stock trading rules.

Stock market traders speculate in the market and try to profit from the price changes of securities. One of the basic approaches to trading in the stock market, widely used in practice, is technical analysis. Technical analysis, unlike the fundamental approach, assumes that stock price changes and patterns of behavior can be predicted solely from the market information (stock price and volume changes) of the preceding period. Technical analysis rules are synthesized using notions such as the simple moving average, double moving averages, and other characteristics of the charts. Our objective in this work is to generate trading rules, similar to technical analysis rules, that allow a high profit to be obtained from price and volume history data. To achieve this objective, we use an evolutionary algorithm to generate the fuzzy system, which consists of a set of fuzzy rules.

Chair: Dr. Roger Gaborski
Reader: Dr. Joe Geigel
Observer: Dr. Yuheng Wang
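As a concrete example of the kind of chart primitive such rules are built from, here is a minimal double-moving-average crossover signal. The window sizes are arbitrary illustrative choices; the project's evolved fuzzy rules are considerably richer than this fixed rule.

    // Illustrative double-moving-average crossover rule (not the project's
    // evolved rules): buy when the short SMA rises above the long SMA.
    class MovingAverageRule {
        static double sma(double[] prices, int end, int window) {
            double sum = 0;
            for (int i = end - window + 1; i <= end; i++) sum += prices[i];
            return sum / window;
        }

        // +1 = buy (short SMA above long SMA), -1 = sell, 0 = not enough data.
        static int signal(double[] prices, int day) {
            int shortWin = 5, longWin = 20;       // illustrative windows
            if (day < longWin - 1) return 0;
            return sma(prices, day, shortWin) > sma(prices, day, longWin) ? 1 : -1;
        }

        public static void main(String[] args) {
            double[] prices = new double[40];
            for (int i = 0; i < prices.length; i++) prices[i] = 100 + i * 0.5;
            System.out.println("signal on last day: "
                    + signal(prices, prices.length - 1));   // rising -> +1
        }
    }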
Masters Project Announcement: Reducing Redundant Contract and Exception Handling Code with Annotations by Jesse A. Lehmier

Title: Reducing Redundant Contract and Exception Handling Code with Annotations
Candidate: Jesse A. Lehmier
E-mail: jal6411@rit.edu
Defence Date: July 5, 2011
Time: 2:00 pm
Location: Break Out Room 3 (70-3576)
URL: http://lehmier.com/masters/

Abstract:

The initial goal of this project was to investigate alternative methodologies for writing contracts and exception handlers, particularly those that use aspect-oriented programming to reduce code redundancy. Following this research, I completed a study of exception-raising and exception-handling tendencies by developing a tool that examines the source code of different programs and reports relevant data. I then refactored an existing application to use aspect-oriented patterns developed in related work. Finally, I developed an annotation library for specifying contracts and exception handlers, along with an annotation processor that generates the corresponding AspectJ source code. The results of each of these research and development efforts are documented in my report.

Chair: James E. Heliotis
Reader: Matthew Fluet
Observer: Minseok Kwon
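To illustrate the style of contract the last step enables, here is a minimal sketch of a precondition declared by an annotation, with a reflective checker standing in for the generated AspectJ advice. The @NotNull annotation and the checker are hypothetical illustrations, not the project's library.

    // Illustrative annotation-declared precondition (hypothetical annotation,
    // not the project's library). A processor or aspect would generate the
    // checking code that checkPreconditions() performs here via reflection.
    import java.lang.annotation.*;
    import java.lang.reflect.*;

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.PARAMETER)
    @interface NotNull {}

    class Account {
        // The contract lives in the signature instead of boilerplate checks.
        void deposit(@NotNull String owner, double amount) {
            System.out.println(owner + " deposited " + amount);
        }
    }

    class ContractDemo {
        // Stand-in for generated advice: verify @NotNull arguments.
        static void checkPreconditions(Method m, Object[] args) {
            Parameter[] params = m.getParameters();
            for (int i = 0; i < params.length; i++) {
                if (params[i].isAnnotationPresent(NotNull.class) && args[i] == null)
                    throw new IllegalArgumentException(
                        "Contract violation: parameter " + params[i].getName()
                        + " of " + m.getName() + " must not be null");
            }
        }

        public static void main(String[] args) throws Exception {
            Method deposit = Account.class.getDeclaredMethod(
                    "deposit", String.class, double.class);
            checkPreconditions(deposit, new Object[] { null, 10.0 });  // throws
        }
    }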
Chair: Professor Joe Geigel
Reader: Professor Reynold Bailey
Observer: Professor Minseok Kwon

Masters Project Announcement: Identifying and Preventing Node Replication Attack in Wireless Sensor Networks by Jeegar Brahmakshatriya

Title: Identifying and Preventing Node Replication Attack in Wireless Sensor Networks
Candidate: Jeegar Brahmakshatriya
E-mail: jvb3350@rit.edu
Defence Date: July 15, 2011
Time: 11:00 am
Location: Breakout Room 3 (70-3576)
URL: https://sites.google.com/a/g.rit.edu?tab=m3&pli=1

Abstract:

A challenging problem in a wireless sensor network is establishing secure connections between the nodes. Asymmetric-key cryptosystems are not suitable for wireless sensor networks because sensor nodes are resource-constrained and vulnerable to physical compromise by an adversary. One popular framework uses the strength of a computer to compute keys and pre-distribute random sets of keys to each node, which the nodes then use to establish secure connections.

The goal of this project is to analyze and enhance the random pairwise key pre-distribution scheme developed by H. Chan, A. Perrig, and D. Song. The random pairwise key pre-distribution scheme is vulnerable to a node replication attack; this project report addresses that vulnerability and provides an enhancement using a voting scheme. The enhanced scheme has been implemented on Sun SPOT wireless sensors and subjected to various levels of node replication attack. A simulation program was also developed for testing the scheme in sensor networks of different sizes.

Chair: Dr. Minseok Kwon
Reader: Dr. Leon Reznik
Observer: Dr. Ivona Bezáková

Masters Project Announcement: Stage Lighting Visualization Using Real Time-Global Illumination by Stephen Sarnelle

Title: Stage Lighting Visualization Using Real Time-Global Illumination
Candidate: Stephen Sarnelle
E-mail: sjs5688@rit.edu
Defence Date: Thursday, July 14 (RESCHEDULED)
Time: 1:00 pm
Location: 70-3600 (Graphics Lab)
URL: http://www.cs.rit.edu/~sjs5688/MSProject/

Chair: Professor Joe Geigel
Reader: Professor Reynold Bailey
Observer: Professor Minseok Kwon

Masters Project Announcement: Intelligent Pruning for Partially Connected Neural Networks with Application to Classification Problems by Anupam Choudhari

Title: Intelligent Pruning for Partially Connected Neural Networks with Application to Classification Problems
Candidate: Anupam Choudhari
E-mail: aac3610@rit.edu
Defence Date: 25th July 2011
Time: 3:00 pm
Location: TBA
URL: http://www.cs.rit.edu/~aac3610/Project.html

Abstract:

The fully connected neural network architecture is extremely general, in the sense that it can implement any function if enough nodes are used, but learning a large-scale task with it is too time-consuming. Moreover, as the size of the network scales up, full connectivity becomes hard to maintain because of physical restrictions. Partially connected neural networks (PCNNs) help to minimize these limitations; they are constructed by intelligently pruning unused and redundant nodes from their fully connected counterparts. In this project, two different pruning strategies are implemented. The first uses a genetic algorithm to prune, or "evolve," the connections and nodes in the network. The second optimization technique uses simulated annealing and tabu search to generate smaller topologies for the network. As a result of this investigation, a novel technique is introduced that applies the genetic algorithm after the simulated annealing and tabu search process to make the network better optimized for training. All of these tuned networks are tested on three separate classification datasets, and the results are compared against the fully connected network from which they were pruned.

Chair: Dr. Leon Reznik
Reader: Dr. Richard Zanibbi
Observer: Dr. Hans-Peter Bischof

Masters Project Announcement: Intelligent Pruning for Partially Connected Neural Networks with Application to Classification Problems by Anupam Choudhari

Title: Intelligent Pruning for Partially Connected Neural Networks with Application to Classification Problems
Candidate: Anupam Choudhari
E-mail: aac3610@rit.edu
Defence Date: 25th July 2011
Time: 3:00 pm
Location: Breakout Room 3 (updated)
URL: http://www.cs.rit.edu/~aac3610/Project.html

Chair: Dr. Leon Reznik
Reader: Dr. Richard Zanibbi
Observer: Dr. Hans-Peter Bischof
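As a concrete illustration of evolution-style pruning, the toy sketch below encodes a chromosome as one bit per connection, trades accuracy against connection count in the fitness, and runs a simple (1+1) evolutionary loop that flips connections on and off. The fitness stub and constants are illustrative assumptions; the project's actual GA, simulated annealing, and tabu search are far richer.

    // Illustrative GA-style pruning over a connection mask (a toy sketch,
    // not the project's implementation).
    import java.util.*;

    class MaskPruning {
        static final Random RNG = new Random(1);

        // Fitness: accuracy minus a complexity penalty per kept connection.
        static double fitness(boolean[] mask) {
            double accuracy = evaluateAccuracy(mask);   // stub below
            return accuracy - 0.001 * countTrue(mask);
        }

        static double evaluateAccuracy(boolean[] mask) {
            // Stub: stands in for training/testing the masked network on a
            // classification dataset, which the real project performs.
            return 0.9;
        }

        static boolean[] mutate(boolean[] mask, double rate) {
            boolean[] child = mask.clone();
            for (int i = 0; i < child.length; i++)
                if (RNG.nextDouble() < rate) child[i] = !child[i];
            return child;
        }

        public static void main(String[] args) {
            boolean[] best = new boolean[64];            // 64 connections
            Arrays.fill(best, true);                     // fully connected
            for (int gen = 0; gen < 200; gen++) {
                boolean[] cand = mutate(best, 0.05);
                if (fitness(cand) > fitness(best)) best = cand;  // (1+1)-EA step
            }
            System.out.println("connections kept: " + countTrue(best));
        }

        static int countTrue(boolean[] m) {
            int c = 0; for (boolean b : m) if (b) c++; return c;
        }
    }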
Masters Project Announcement: Statistical and Performance Analysis of SHA-3 Candidates by Ashok Vepampedu Karunakaran

Title: Statistical and Performance Analysis of SHA-3 Candidates
Candidate: Ashok Vepampedu Karunakaran
E-mail: axv9713@rit.edu
Defence Date: 08/15/2011
Time: 10:00 am
Location: 70-3576 (Breakout room #3)
URL: http://sha3project.weebly.com/

Abstract:

A hash function takes input data, called the message, and produces a condensed representation, called the message digest. Security flaws have been detected in some of the most commonly used hash functions, such as MD5 (Message Digest) and SHA-1 (Secure Hash Algorithm). Therefore, NIST started a design competition for a new hash standard, to be called SHA-3. The SHA-3 competition is currently in its final round, with five candidates remaining. The following is a gist of the tasks carried out for the project:

• Randomness - A good hash function should behave as closely to a random function as possible. Statistical tests help determine the randomness of a hash function, and NIST recommends a series of tests in a statistical test suite for this purpose. This suite has been used to analyze the randomness of the final five hash functions.

• Performance - Performance is another critical factor in determining a good hash function. The performance of all fourteen Round 2 candidates was measured, using Java as the programming language, on Sun platform machines for small messages; no such tests had been carried out with this combination before.

• Security - Security is the most important criterion for hash functions. Grøstl is one of the final five candidates, and its architecture, design, and security features have been studied in detail, including some of the successful attacks on reduced versions. The lesser-known Round 2 candidates Fugue and ECHO have also been studied.

Chair: Prof. Stanislaw Radziszowski
Reader: Prof. Peter Bajorski
Observer: Prof. Christopher Homan
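To show the flavor of the randomness analysis, here is a minimal sketch of the monobit (frequency) test from the NIST statistical test suite applied to one digest. SHA-256 stands in for a SHA-3 candidate, since the candidates' implementations are not shown here, and a single digest is of course far too little data for a real assessment; the suite runs this test over long bit sequences. The erfc approximation is a standard numerical fit.

    // Illustrative NIST-style monobit (frequency) test: for random-looking
    // output, ones and zeros should be balanced. Passes when p >= 0.01.
    import java.security.MessageDigest;

    class MonobitTest {
        public static void main(String[] args) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest("small message".getBytes("UTF-8"));

            int n = digest.length * 8, ones = 0;
            for (byte b : digest) ones += Integer.bitCount(b & 0xFF);

            // s_obs = |#ones - #zeros| / sqrt(n); p-value = erfc(s_obs / sqrt(2)).
            double sObs = Math.abs(2.0 * ones - n) / Math.sqrt(n);
            double pValue = erfc(sObs / Math.sqrt(2.0));
            System.out.printf("ones=%d/%d  p-value=%.4f  %s%n",
                    ones, n, pValue, pValue >= 0.01 ? "PASS" : "FAIL");
        }

        // Numerical approximation of the complementary error function.
        static double erfc(double x) {
            double t = 1.0 / (1.0 + 0.5 * Math.abs(x));
            double ans = t * Math.exp(-x * x - 1.26551223 + t * (1.00002368
                + t * (0.37409196 + t * (0.09678418 + t * (-0.18628806
                + t * (0.27886807 + t * (-1.13520398 + t * (1.48851587
                + t * (-0.82215223 + t * 0.17087277)))))))));
            return x >= 0 ? ans : 2.0 - ans;
        }
    }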
Masters Project Announcement: Bounded Rationality and Human Behavior in Normal-Form Games by Peter Mahon

Title: Bounded Rationality and Human Behavior in Normal-Form Games
Candidate: Peter Mahon
E-mail: pgm2789@rit.edu
Defence Date: August 23, 2011
Time: 3:00 pm
Location: 70-3576
URL: http://www.cs.rit.edu/~pgm2789/project.html

Abstract:

The goal of this project was to use eye-tracking to uncover hidden, preconscious strategies of human players during a series of computer-generated simultaneous normal-form games. Four normal-form games were designed and used as the test bed for the eye-tracking experiment: the Coordination Game (CO), Battle of the Sexes (BS), the Game of Chicken (CH), and Prisoner's Dilemma (PD). These games are abstractions of real-life scenarios in which a person must choose either to cooperate with another person for some common good or not to cooperate, given a specific "payoff" for cooperating or not cooperating. For this project, the other player was always an automated agent whose goal was to learn the strategies of the human players.

The agent's ability to learn a player's strategy is based upon a numeric index that strongly correlates with the probability of a specific choice. The index was calculated from the discrete values of the payoff matrix for a particular game. Existing research suggests that such an index exists for Prisoner's Dilemma; however, a numeric index for the other games had not yet been found. That index, termed the cooperation index (CI), is experimentally confirmed in this project, which also designed and tested a new index for the Game of Chicken, called the risk index (RI). For Prisoner's Dilemma, players were found to cluster into two main types: those whose play follows the cooperation index and those whose play does not. Eye-tracking data collected during this project confirms that attention deployed to particular areas of interest (AOIs) varies according to the type the player belongs to. For the remaining three games, a new metric specific to each game was created to indicate the probability that the player will make a choice based on his or her past history of choices and the sequence of moves made. Again, two types of players were found: those likely to choose a specific option and those likely to choose a different option. Eye-tracking metrics also showed differences between the two types, which enabled a decision tree to be built from the eye-tracking data. The results of the decision-tree classification were used by the automated agent to classify each player as a specific type, thereby allowing a prediction to be made about the player's likely choice. Classifying a player based on his or her past history of moves, combined with eye-tracking metrics, may help improve artificial agents' game-playing behavior, whether for the benefit of the player or as a competitor.

Chair: Dr. Roxanne L. Canosa
Reader: Dr. Reynold Bailey
Observer: Dr. Warren Carithers

Masters Project Announcement: Caching Techniques in Distributed File Systems by Sagar Kotak

Title: Caching Techniques in Distributed File Systems
Candidate: Sagar Kotak
E-mail: svk3988@rit.edu
Defence Date: 09/26/2011
Time: 2:00 pm
Location: 70-3576
URL: https://sites.google.com/site/mastersproject123/

Abstract:

The main idea of a distributed file system is to distribute files over the network among different clients, which then access them. Reading these files remotely costs each client considerable time, latency, and bandwidth, so an alternative mechanism is required to reduce latency and bandwidth consumption. Caching is one such mechanism: it helps maintain reliability and consistency, stores and manages data files so that they can be accessed faster, and avoids direct access to the storage device, thereby reducing bandwidth consumption and network latency and allowing more requests to be served.

Caching can be deployed in various architectures, such as client-server and P2P. Of the many caching techniques, this project simulates the following three: the hint-based caching algorithm, the hint-based predictive prefetching caching algorithm, and the peer-to-peer global caching algorithm. The project then compares the performance of the algorithms on evaluation metrics such as average block access time and cache hit rate, and plots the results as graphs.

Chair: Dr. Hans-Peter Bischof
Reader: Dr. Minseok Kwon
Observer: Dr. James Heliotis
Masters Project Announcement: Adaptive Genetic Algorithm for Path Planning of Mobile Robots by Lokesh Jain

Title: Adaptive Genetic Algorithm for Path Planning of Mobile Robots
Candidate: Lokesh Jain
E-mail: lokesh.jain@gmail.com
Defence Date: Oct 3rd, 2011
Time: 10:00 am
Location: CS Break Room 3 (GOL-3576)
URL: http://www.cs.rit.edu/~lxj9041/MSProject.html

Abstract:

This MS project provides an adaptive approach to planning collision-free paths for mobile robots using genetic algorithms. The Adaptive Genetic Algorithm (AGA) adds decision-making capability to the individual genetic algorithm (GA) operators of crossover and mutation, so that they consider the obstacles present in the environment when determining optimal paths. In a GA for path planning, the population is composed of individual chromosomes, each of which represents a probable path of the mobile robot. Applying the AGA ensures that the evolved population is not limited to combinations of the initial chromosome pool: it alters the selected chromosome (path) to avoid the obstacles around it while planning the shortest path. The simulated environment and the implementation details of the AGA are described, and the experimentation is carried out by varying the GA parameters and comparing the results against a non-adaptive GA.

Chair: Prof. Zack Butler
Reader: Prof. Leon Reznik
Observer: Prof. James E. Heliotis
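For a feel of what obstacle-aware mutation can mean here, the toy sketch below perturbs a random interior waypoint and re-samples until the adjacent path segments clear all obstacles (modeled as circles), keeping the parent if no feasible mutation is found. The environment model, operators, and constants are illustrative assumptions, not the project's implementation.

    // Illustrative obstacle-aware mutation for a path chromosome: a path is
    // a list of waypoints; mutation nudges one waypoint and retries until
    // the two adjacent segments avoid every circular obstacle.
    import java.util.*;

    class AdaptiveMutation {
        record Point(double x, double y) {}
        record Circle(Point c, double r) {}

        static final Random RNG = new Random(7);

        static List<Point> mutate(List<Point> path, List<Circle> obstacles) {
            List<Point> child = new ArrayList<>(path);
            int i = 1 + RNG.nextInt(path.size() - 2);   // keep start and goal
            for (int tries = 0; tries < 50; tries++) {
                Point cand = new Point(path.get(i).x() + RNG.nextGaussian(),
                                       path.get(i).y() + RNG.nextGaussian());
                child.set(i, cand);
                if (clear(child.get(i - 1), cand, obstacles)
                        && clear(cand, child.get(i + 1), obstacles))
                    return child;                        // feasible mutation
            }
            return path;                                 // give up, keep parent
        }

        // Segment-vs-circle test: closest point on the segment to the
        // obstacle center must lie at least one radius away.
        static boolean clear(Point a, Point b, List<Circle> obstacles) {
            for (Circle o : obstacles) {
                double abx = b.x() - a.x(), aby = b.y() - a.y();
                double len2 = abx * abx + aby * aby;
                double t = len2 == 0 ? 0 : Math.max(0, Math.min(1,
                        ((o.c().x() - a.x()) * abx + (o.c().y() - a.y()) * aby) / len2));
                double dx = a.x() + t * abx - o.c().x();
                double dy = a.y() + t * aby - o.c().y();
                if (dx * dx + dy * dy < o.r() * o.r()) return false;
            }
            return true;
        }
    }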
Masters Project Announcement: Data Interpretation and Analysis in Wireless Sensor Networks by Savvithri Sivaraamakrishnan

Title: Data Interpretation and Analysis in Wireless Sensor Networks
Candidate: Savvithri Sivaraamakrishnan
E-mail: sxs3286@rit.edu
Defence Date: 10-17-2011
Time: 10:00 am
Location: Rm-3400
URL: http://people.rit.edu/sxs3286/

Abstract:

A wireless sensor network (WSN) consists of a set of sensor nodes that are generally deployed to monitor environmental conditions such as temperature, rainfall, and humidity. WSNs play a very important role in applications such as habitat or wildlife monitoring, where human intervention is not possible. A sensor network deployed for such purposes may have many sensor nodes, deployed randomly and interconnected with one another. These nodes record the physical conditions at regular intervals and send the collected data to the base station, where a huge amount of data accumulates; the data may not always follow a regular pattern.

Historical data is very important for predicting the weather conditions of a region, and the quality of the data transmitted to the base station matters when the user must make critical decisions. We would therefore like to provide meaningful representations of the data, to help the user understand the environmental conditions better. Thus, the goal of the project is to interpret and visualize the data while also giving the user information about its quality. The data collected at the base station can be recorded in a database management system and then retrieved to build meaningful visualizations for the end user.

Chair: Dr. Leon Reznik
Reader: Dr. Rajendra Raj
Observer: Dr. Roger Gaborski

Masters Project Announcement: A Relational Database Metadata Framework and Query Tool for Naive Users by Steven B. Baylor

Title: A Relational Database Metadata Framework and Query Tool for Naive Users
Candidate: Steven B. Baylor
E-mail: sbb7859@rit.edu
Defence Date: Tuesday, October 25, 2011
Time: 2:00 pm
Location: GOL-3672 (Breakout Room 4)
URL: http://www.cs.rit.edu/~sbb7859/

Abstract:

In industry, especially in engineering environments, there often exists a gap between the data extraction needs of relational database customers and the capabilities of the tools that are within the typical customer's usability level. The reporting and querying capabilities of custom-built database applications can fall short of users' needs, and the available generic data retrieval tools are either targeted at advanced users or require substantial overhead to integrate and maintain.

I hypothesize that a powerful data retrieval system can be developed that is within the usability comfort level of non-database-savvy end users and requires minimal effort to integrate into existing database systems. I have developed the basis for such a system and have shown, through usability testing, that the concept is feasible.

Chair: Rajendra K. Raj
Reader: Xumin Liu
Observer: Carol J. Romanowski

Masters Project Announcement: Wildland Fire Detection Using Multispectral Imagery by Sobha Duvvuri

Title: Wildland Fire Detection Using Multispectral Imagery
Candidate: Sobha Duvvuri
E-mail: sxd9404@rit.edu
Defence Date: October 31, 2011
Time: 11:00 am
Location: Building 70, Room 3576
URL: http://people.rit.edu/sxd9404/

Abstract:

The Hybrid Contextual Algorithm, developed by Ying Li, detects wildland fires in multispectral imagery. The algorithm takes multispectral images as input and differentiates the background and non-fire pixels from the fire pixels, treating detection as a background suppression problem. The method is advantageous over existing algorithms because it can be used with different sensors and requires the manual setting of only two thresholds. Existing algorithms are specific to a sensor and do not make use of inter-band information to compute the statistics needed to differentiate the background from the actual fire pixels; because they ignore that information, they must set multiple thresholds for each band based on observation. In this algorithm, only two thresholds are set: the NTI threshold and the Mahalanobis threshold.

The objective of this project is to understand and implement the Hybrid Contextual Fire Detection Algorithm so that it dovetails with the software architecture of the airborne imaging sensor system developed at RIT, known as the Wildfire Airborne Sensor Program (WASP). The implementation reads the multispectral images as input, processes them, and outputs an image highlighting the actual fire pixels. Sensitivity analysis is performed on a set of test images, and the thresholds are tuned based on observation; experiments measure the execution time for images of varying sizes. This implementation, when integrated into the WASP workflow, will give decision makers timely, synoptic information about affected areas for designing an optimal response of personnel and materials.

Chair: Dr. Hans-Peter Bischof
Reader: Dr. Anthony Vodacek
Observer: Dr. Jan Van Aardt
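A minimal sketch of the Mahalanobis-distance test at the heart of such background suppression: a pixel whose spectral vector lies far, in Mahalanobis distance, from the background statistics is flagged as a fire candidate. The two-band setup, the sample statistics, and the threshold value are illustrative assumptions; the algorithm's real statistics and its NTI test are not shown.

    // Illustrative Mahalanobis threshold test for background suppression
    // (a toy two-band setup, not the algorithm's actual statistics).
    class MahalanobisTest {
        // d^2 = (x - mean)^T * inv(Cov) * (x - mean) for two bands,
        // using the closed-form inverse of a 2x2 covariance matrix.
        static double mahalanobisSq(double[] x, double[] mean, double[][] cov) {
            double dx = x[0] - mean[0], dy = x[1] - mean[1];
            double det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0];
            double i00 =  cov[1][1] / det, i01 = -cov[0][1] / det;
            double i10 = -cov[1][0] / det, i11 =  cov[0][0] / det;
            return dx * (i00 * dx + i01 * dy) + dy * (i10 * dx + i11 * dy);
        }

        public static void main(String[] args) {
            double[] mean = { 300.0, 290.0 };        // background band means
            double[][] cov = { { 25.0, 5.0 },        // background covariance
                               {  5.0, 16.0 } };
            double threshold = 9.0;                  // illustrative cutoff
            double[] pixel = { 360.0, 310.0 };       // hot-pixel candidate
            double d2 = mahalanobisSq(pixel, mean, cov);
            System.out.println("d^2 = " + d2
                    + (d2 > threshold ? "  -> fire candidate" : "  -> background"));
        }
    }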
Masters Project Announcement: A Study of the Parallelism and Efficiency of the Index Calculus Algorithm by Michael R Pratt

Title: A Study of the Parallelism and Efficiency of the Index Calculus Algorithm
Candidate: Michael R Pratt
E-mail: mrp9521@rit.edu
Defence Date: Monday, November 14, 2011
Time: 10:00 am
Location: 70-3672
URL: http://www.cs.rit.edu/~mrp9521/grad/grad.html

Abstract:

Computing discrete logarithms in finite fields and in general groups is considered a difficult problem, and many modern public-key cryptosystems are based on this apparent difficulty. The index calculus algorithm provides a sub-exponential, parallelizable method for solving discrete logarithms in Zp* using a pre-computed factor base. The chosen size of the factor base affects the size of the algorithm's sequential fraction, and therefore its ability to scale efficiently with computing resources. This project explores the relationship between the size of the factor base and the scalability of the algorithm on a distributed parallel system.

For our investigation, we develop a parallel implementation of the index calculus algorithm for solving problems in Zp* and deploy it on distributed computing resources provided by the TeraGrid project. Our results show that scalability generally improves with the size of the factor base, and they do not indicate the optimal choice for maximizing scalability that we expected.

Chair: Stanislaw Radziszowski
Reader: Ivona Bezáková
Observer: Hossein Shahmohamad

Masters Project Announcement: Real-Time Full Spectral Rendering by Vinayak Suley

Title: Real-Time Full Spectral Rendering
Candidate: Vinayak Suley
E-mail: vinasul@microsoft.com
Defence Date: 11/18/2011
Time: 1:00 pm
Location: TBD
URL: http://vinayaksuley.blogspot.com/2011/11/masters-project-announcement-page.html

Abstract:

This project investigates the practicality of real-time full-spectral rendering for interactive applications. The practical considerations are the feasibility of the added computation required and the final visual difference compared with an RGB renderer. We want to test whether a significant increase in color accuracy can be achieved at a justifiable computing cost.

Chair: Dr. Joe Geigel
Reader: Dr. Reynold Bailey
Observer: Dr. Hans-Peter Bischof
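The core computation that separates a spectral renderer from an RGB one can be sketched in a few lines: integrate a spectral power distribution against color matching functions to obtain XYZ, then convert XYZ to linear RGB. In the sketch below, the Gaussian matching functions are crude toy stand-ins for the real CIE curves (only the structure is the point), while the XYZ-to-linear-sRGB matrix is the standard one; none of this is the project's renderer.

    // Illustrative spectral-to-RGB pipeline: Riemann-sum the SPD against
    // matching functions for XYZ, then apply the XYZ -> linear sRGB matrix.
    class SpectralToRgb {
        static double gauss(double x, double mu, double sigma) {
            double d = (x - mu) / sigma;
            return Math.exp(-0.5 * d * d);
        }
        // Toy stand-ins for the CIE x-bar, y-bar, z-bar curves (illustrative).
        static double xBar(double nm) { return gauss(nm, 600, 40) + 0.35 * gauss(nm, 445, 25); }
        static double yBar(double nm) { return gauss(nm, 555, 45); }
        static double zBar(double nm) { return 1.7 * gauss(nm, 450, 25); }

        static double[] spdToXyz(java.util.function.DoubleUnaryOperator spd) {
            double x = 0, y = 0, z = 0, dl = 5.0;    // 5 nm integration steps
            for (double nm = 380; nm <= 730; nm += dl) {
                double p = spd.applyAsDouble(nm);
                x += p * xBar(nm) * dl;
                y += p * yBar(nm) * dl;
                z += p * zBar(nm) * dl;
            }
            return new double[] { x, y, z };
        }

        // Standard XYZ -> linear sRGB (D65) conversion matrix.
        static double[] xyzToLinearRgb(double[] xyz) {
            double X = xyz[0], Y = xyz[1], Z = xyz[2];
            return new double[] {
                 3.2406 * X - 1.5372 * Y - 0.4986 * Z,
                -0.9689 * X + 1.8758 * Y + 0.0415 * Z,
                 0.0557 * X - 0.2040 * Y + 1.0570 * Z };
        }

        public static void main(String[] args) {
            // Flat (equal-energy) spectrum as a demo input.
            double[] rgb = xyzToLinearRgb(spdToXyz(nm -> 1.0));
            System.out.printf("linear RGB = %.3f %.3f %.3f%n", rgb[0], rgb[1], rgb[2]);
        }
    }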