I am an Associate Professor of Computer Science at Yale University. My group studies computer architectures and systems for platforms ranging from data center servers to brain-computer interfaces. I am part of Yale's Computer Systems Lab and Interdepartmental Neuroscience Program, and am also a Fellow of Grace Hopper College.
Modern computer systems integrate diverse accelerators and memory technologies, offering significant performance gains but complicating the programming models familiar to software developers. We build systems abstractions to improve hardware programmability, and architect hardware and systems software support to implement these abstractions efficiently.
We have worked on the virtual memory abstraction with contributions to translation contiguity, memory transistency, and GPU address translation. Our work on coalesced TLBs has been integrated into AMD's chips, and our large page optimizations are now in Linux. Our work on giving GPUs direct access to storage, networking, and memory management services has influenced Radeon Open Compute's hyperscale computing stack.
We have also been building heterogeneous architectures that advance the brain sciences, helping treat neurological disorders and offering a path toward more explainable and transparent AI. In our HALO project, we are taping out ultra-low-power, flexible chips for brain-computer interfaces and evaluating them using data collected from non-human primates and epilepsy patients.