De-Biasing Neural Networks

Neural networks trained on biased categorical datasets can produce biased predictions. This issue is especially acute when a certain type of data is difficult to acquire for training yet appears in the wild where one would like to deploy the network. Here, we are working on a technique to reduce sensitivity to such training imbalances.
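The technique itself is still in progress, but a common baseline for this kind of class imbalance is to reweight the loss inversely to class frequency, so rare classes contribute as much total loss as common ones. A minimal sketch (illustrative only, not the method under development here):

```python
import numpy as np

def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency in the training set.
    This is a standard baseline for imbalanced data, not the de-biasing
    technique described above."""
    classes, counts = np.unique(labels, return_counts=True)
    # Normalized so that weights average to 1 across classes.
    weights = counts.sum() / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

# Example: class 1 is rare (2 of 8 samples), so it receives a larger weight.
labels = np.array([0, 0, 0, 0, 0, 0, 1, 1])
w = inverse_frequency_weights(labels)
```

These weights would then scale each sample's loss term during training, so a misclassified rare example costs more than a misclassified common one.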

Optimization Methods

Training deep neural networks is extremely energy intensive. Optimization methods that converge in fewer training epochs can drastically reduce the energy consumption of large models, and better optimization techniques can also significantly improve performance and generalization. I am currently working on one such method, which shows promising speed-ups and performance gains by reframing the optimization path more formally as a loss trajectory. It is my hope that this technique will also provide some scientific insight into learning in deep neural networks.
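The trajectory-based method is not shown here, but for context, this is the conventional update rule that faster-converging optimizers build on: SGD with momentum, traced over a toy quadratic loss. All parameter values are illustrative.

```python
import numpy as np

def sgd_momentum_step(params, grads, velocity, lr=0.05, beta=0.9):
    """One step of SGD with momentum, the baseline that improved
    optimizers aim to beat: v <- beta*v + g; p <- p - lr*v."""
    velocity = beta * velocity + grads
    params = params - lr * velocity
    return params, velocity

# Minimize f(x) = x^2 (gradient 2x) starting from x = 5.0; the sequence
# of iterates traces the optimization path through the loss landscape.
x, v = np.array([5.0]), np.array([0.0])
for _ in range(200):
    x, v = sgd_momentum_step(x, 2 * x, v)
```

Each iterate `x` is one point on the loss trajectory; a method that shapes this trajectory well reaches a good minimum in fewer steps.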

Neuromorphic Computing

I am working on a neuromorphic computing system in collaboration with researchers at the University of Illinois Urbana-Champaign and Zhejiang University.

Spiking neural networks on neuromorphic hardware offer numerous benefits over standard neural networks on conventional accelerators. Reasons for pursuing them include lower power consumption, event-driven computation, massively parallel processing, elimination of the von Neumann memory wall through collocated memory and processing, robustness to noise, scalability, and basic research into artificial and biological neural networks.
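The event-driven character of these networks can be seen in the simplest spiking model, the leaky integrate-and-fire (LIF) neuron: the membrane potential leaks toward rest, integrates input, and communication happens only at discrete threshold-crossing events. A minimal simulation with illustrative (unitless) parameters, not tied to any particular hardware:

```python
def lif_simulate(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron. Returns the time steps at which
    the neuron spiked. Parameters are illustrative, not hardware-specific."""
    v = 0.0
    spikes = []
    for t, i_in in enumerate(input_current):
        v += (dt / tau) * (-v + i_in)  # leaky integration of input
        if v >= v_thresh:              # threshold crossing -> spike event
            spikes.append(t)
            v = v_reset                # membrane resets after the spike
    return spikes

# Constant drive above threshold produces a regular spike train.
spikes = lif_simulate([1.5] * 100)
```

Between spikes the neuron emits nothing, which is what lets neuromorphic hardware skip computation and communication when no events occur.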

Applying Insights from Neuroscience to AI (NeuroAI)

The field of neural networks grew out of knowledge garnered from neuroscience, which led to the rate-based models most ANNs use today. Further insights, such as the roles of glial cells and dendritic processing, are being explored and may improve even rate-based ANNs.
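For concreteness, a rate-based unit abstracts away individual spikes and models only a neuron's average firing rate as a continuous function of its weighted input. A minimal example with an arbitrary logistic activation:

```python
import math

def rate_unit(inputs, weights, bias=0.0):
    """A rate-based artificial neuron: the continuous output stands in for
    a biological neuron's average firing rate rather than discrete spikes."""
    drive = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-drive))  # logistic activation in (0, 1)

# Positive net drive (2.0*1.0 + (-1.0)*0.5 = 1.5) yields a rate above 0.5.
r = rate_unit([1.0, 0.5], [2.0, -1.0])
```

Extensions inspired by glia or dendrites would add structure beyond this single weighted sum, which is exactly where the insights mentioned above could enter.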