Alexander G. Ororbia II

Contact Information:
Dept. of Computer Science
Rochester Institute of Technology
20 Lomb Memorial Dr
Rochester, NY 14623
ago AT cs DOT rit DOT edu
"The whole thinking process is still rather mysterious to us, but I believe that the attempt to make a thinking machine will help us greatly in finding out how we think ourselves."
-Alan Turing
Alex, one of the Connectionists!


Educational Background:

  • B.S.E., Computer Science & Engineering, Bucknell University
    • Philosophy Minor, Bucknell University
    • Mathematics Minor, Bucknell University
  • Ph.D., Information Sciences & Technology, The Pennsylvania State University
    • Social Data Analytics (SoDA) Minor, The Pennsylvania State University

I am an assistant professor in the RIT Computer Science Department, where I am starting the Neural Adaptive Computing (NAC) Laboratory. I am also the liaison for the CS department's "Nature-Inspired and Evolutionary Computing" subarea. You can find a link to my curriculum vitae here, along with links to my Google Scholar page and my ResearchGate page (the latter is much more accurate than Google Scholar in terms of paper citations). I am also quasi-active on Quora (a question-and-answer forum/website). I fall under the RIT Computer Science Department's Artificial Intelligence Cluster (or Intelligent Systems Cluster).
Figure: The proposed Temporal Neural Coding Network (TNCN) for continual (sequence) learning and modeling (Ororbia et al., 2018).

News

  • June 25, 2019: Continual Learning of Recurrent Neural Networks by Locally Aligning Distributed Representations accepted as a full article in IEEE Transactions on Neural Networks and Learning Systems (TNNLS 2019)
  • May 14, 2019: Like a Baby: Visually Situated Neural Language Acquisition accepted for publication at the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019)
  • March 20, 2019: Investigating Recurrent Neural Network Memory Structures using Neuro-Evolution accepted as a full paper for publication at the Genetic and Evolutionary Computation Conference (GECCO 2019)
  • March 11, 2019: A Neural Temporal Model for Human Motion Prediction accepted for publication at the Conference on Computer Vision and Pattern Recognition (CVPR 2019)
  • December 27, 2018: Learned Neural Iterative Decoding for Lossy Image Compression Systems accepted as a full paper in the proceedings of the 2019 Data Compression Conference (DCC 2019)
  • October 31, 2018: Biologically Motivated Algorithms for Propagating Local Target Representations accepted for publication at the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)
  • June 15, 2018: Dissertation defense passed: Coordinated Local Learning Algorithms for Continuously Adaptive Neural Systems (Ororbia)

    Note to Ph.D. Applicants: Position Details

    I am looking for motivated, talented, and enthusiastic Ph.D. students to work in the area of machine learning, specifically on developing more neurocognitively plausible approaches to adaptation and to memory formation and retention in artificial neural systems. This entails investigating and formalizing more realistic forms of neural computation, including models of spiking neurons. A student applying to work with me should be interested in biologically motivated forms of optimization and in developing models of predictive coding, with a focus on continual (machine) learning. Please contact me with a short description of yourself, your interests as they relate to the details of this position, and your CV. (In addition, if you are well-versed in Scala as well as Python, certainly make that clear to me.)
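    For applicants unfamiliar with the term, here is a minimal, purely illustrative Python sketch of the flavor of predictive coding referred to above. This is not code from the NAC Lab, and every name, dimension, and step size in it is an arbitrary assumption: a tiny two-layer hierarchy iteratively settles its latent state to reduce a local prediction error, then updates its weights with a local, Hebbian-style rule.

        # Illustrative predictive coding sketch (NOT the NAC Lab's actual code).
        # All sizes, rates, and names here are arbitrary choices for exposition.
        import numpy as np

        rng = np.random.default_rng(0)
        d_x, d_z = 8, 4                       # input / latent sizes (arbitrary)
        W = rng.normal(0.0, 0.1, (d_x, d_z))  # top-down generative weights
        phi = np.tanh                         # latent activation function

        def settle(x, W, steps=50, beta=0.1):
            """Infer a latent state z whose top-down prediction W @ phi(z) matches x."""
            z = np.zeros(d_z)
            e = x - W @ phi(z)
            for _ in range(steps):
                e = x - W @ phi(z)                           # local prediction error
                z += beta * (1.0 - phi(z) ** 2) * (W.T @ e)  # error-driven state update
            return z, e

        # One learning step: settle the latent state, then apply a local weight update.
        x = rng.normal(size=d_x)                 # a dummy input pattern
        z, e = settle(x, W)
        W += 0.01 * np.outer(e, phi(z))          # Hebbian-style: error times activity
        print("residual error norm:", np.linalg.norm(e))

    Update rules of this kind are attractive precisely because every quantity involved is locally available to the layer concerned, in contrast with the global error transport that backpropagation requires.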

    Research Interests

    • Lifelong machine learning (LML), also referred to as never-ending learning, lifelong learning, and continual learning: See the lab page for further details
    • Biologically inspired learning algorithms and statistical learning procedures (especially for semi-supervised, stream-based, and multi-modal learning)
    • Artificial neural networks:
      • Spiking neural networks, recurrent networks, neural memory models
      • Neural probabilistic/generative models, particularly Boltzmann machines and variational autoencoders
    • Nature-inspired metaheuristic optimization, including ant colony optimization, particle swarm optimization, and neuro-evolution

    Research Statement

    The focus of my work is on lifelong learning, an important and challenging open problem in machine learning. I study representation learning and draw insights from cognitive neuroscience to create intelligent systems that are ultimately meant to improve their performance in online, semi-supervised, real-world environments. Statistical learning programs today perform well on very constrained, narrowly defined tasks, but they struggle and fail when required to extract and aggregate knowledge across multiple tasks (consisting of data from multiple modalities) and to cope with non-stationary, one-shot, and zero-shot learning environments. My mission is to develop the learning algorithms and models needed to create general-purpose, adaptive agents.

    It is the endeavor of my research group, the Neural Adaptive Computing (NAC) Laboratory, to synthesize key aspects of models of cognition and biological neuro-circuitry, as well as theories of mind and brain functionality, to construct new learning algorithms and architectures that generalize better to unseen data and continually adapt to novel situations. Ultimately, the hope is that by building lifelong learning machines, we might gain further insight into the workings of human intelligence itself.

    Current Students / Members of the NAC Lab

    Ankur Mali, Ph.D. student (co-advised w/ Dr. C. Lee Giles at Penn State University) - neural memory systems, learning algorithms, lifelong machine learning
    AbdElRahman ElSaid, Ph.D. student (co-advised w/ Dr. Travis Desell at Rochester Institute of Technology) - ant colony optimization, metaheuristics
    Timothy Zee, Ph.D. student (co-advised w/ Dr. Ifeoma Nwogu at Rochester Institute of Technology) - learning algorithms, interpretable neural systems, convolutional networks
    Xu Sun, MSc student - recurrent networks, time series
    Michael Peechatt, MSc student - intelligent quality assurance, ant colony optimization / machine learning
    Hitesh Ulhas Vaidya, MSc student - lifelong machine learning, convolutional networks
    William Gebhardt, BS student - lifelong machine learning, neuroevolution
    James Le, MSc student - Boltzmann machines

    Teaching

    2019-2020

  • CSCI 739: Introduction to Machine Learning (Intro to ML)

    2018-2019

  • CSCI 633: Biologically Inspired Intelligent Systems
  • CSCI 630: Foundations of Intelligent Systems (Intro to AI)