Alexander G. Ororbia II

Contact Information:
Dept. of Computer Science
Rochester Institute of Technology
20 Lomb Memorial Dr
Rochester, NY 14623
ago AT cs DOT rit DOT edu
"The whole thinking process is still rather mysterious to us, but I believe that the attempt to make a thinking machine will help us greatly in finding out how we think ourselves."
-Alan Turing
Alex, one of the Connectionists!


Educational Background:

  • B.S.E., Computer Science & Engineering, Bucknell University
    • Philosophy Minor, Bucknell University
    • Mathematics Minor, Bucknell University
  • Ph.D., Information Sciences and Technology, The Pennsylvania State University
    • Social Data Analytics (SoDA) Minor, The Pennsylvania State University

I am an assistant professor in the RIT Computer Science Department, where I am starting the Neural Adaptive Computing (NAC) Laboratory. You can find a link to my curriculum vitae here, as well as links to my Google Scholar page and my ResearchGate page (which is more accurate than Google Scholar in terms of paper citations). I am also quasi-active on Quora (a question-and-answer website). I fall under the RIT Computer Science Department's Artificial Intelligence Cluster (also known as the Intelligent Systems Cluster).
The Temporal Neural Coding Network: the proposed TNCN for continual (sequence) learning and modeling (Ororbia et al., 2018).

News

  • March 20, 2019: New work, Investigating Recurrent Neural Network Memory Structures using Neuro-Evolution, accepted as a full paper for publication at the Genetic and Evolutionary Computation Conference (GECCO 2019)
  • Ph.D. Position for Fall 2019
    • I am looking for motivated, talented, and enthusiastic Ph.D. students to work in the area of machine learning, specifically on developing neural memory models, biologically-motivated learning and optimization algorithms, and models of predictive coding, with a focus on continual (machine) learning. Please contact me with a short description of yourself, your interests, and your CV. (If you are well-versed in Scala as well as Python, certainly make that clear to me.)
  • December 27, 2018: New work, Learned Neural Iterative Decoding for Lossy Image Compression Systems, accepted as a full paper for publication at the 2019 Data Compression Conference (DCC 2019)
  • October 31, 2018: New work, Biologically Motivated Algorithms for Propagating Local Target Representations, accepted for publication at the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)
  • June 15, 2018: Dissertation defense passed: Coordinated Local Learning Algorithms for Continuously Adaptive Neural Systems (Ororbia)

Research Interests

  • Lifelong machine learning (LML), also referred to as never-ending learning, lifelong learning, and continual learning
  • Biologically-inspired learning algorithms and statistical learning procedures (especially for semi-supervised/stream-based/multi-modal learning)
  • Artificial neural networks:
    • Recurrent networks, neural memory models
    • Neural probabilistic/generative models, particularly Boltzmann machines and variational autoencoders

Research Statement

The focus of my work is lifelong learning, an important and challenging open problem in machine learning. I study representation learning and draw insights from cognitive neuroscience to create intelligent systems that are ultimately meant to improve their performance in online, semi-supervised, real-world environments. Statistical learning programs today perform well on very constrained, narrowly defined tasks but struggle and fail when required to extract and aggregate knowledge across multiple tasks (consisting of data from multiple modalities) and to cope with non-stationary, one-shot, and zero-shot learning environments. My mission is to develop the learning algorithms and models needed to create such general-purpose, adaptive agents.

It is the endeavor of my research group, the Neural Adaptive Computing (NAC) Laboratory, to synthesize key aspects of models of cognition and biological neuro-circuitry, as well as theories of mind and brain function, to construct new learning algorithms and architectures that generalize better to unseen data and continually adapt to novel situations. Ultimately, the hope is that by building lifelong learning machines, we might gain further insight into the workings of human intelligence itself.

Current Students / Members of the NAC Lab

Ankur Mali, Ph.D. student (co-advised w/ Dr. C. Lee Giles at Penn State) - neural memory systems, learning algorithms
AbdElRahman ElSaid, Ph.D. student (co-advised w/ Dr. Travis Desell at Rochester Institute of Technology) - ant colony optimization, metaheuristics
Xu Sun, MSc student - recurrent networks, time series
Michael Peechatt, MSc student - finite state machines, stochastic optimization

Teaching

2018-2019:

  • CSCI 633: Biologically Inspired Intelligent Systems
  • CSCI 630: Foundations of Intelligent Systems (Intro to AI)