![Featured image for “[MS/PhD] PhD Defense Announcement – FPGA Acceleration of Machine Learning with Homomorphic Encryption by Yang Yang”](https://www.cs.usc.edu/wp-content/uploads/2025/06/USC-Featured-Photo.png)
The following announcement is from [Yang Yang – PhD Candidate]. Please contact them directly if you have any questions.
Title: FPGA Acceleration of Machine Learning with Homomorphic Encryption
PhD Candidate: Yang Yang
Committee Members: Prof. Murali Annavaram, Prof. Rajgopal Kannan, Prof. Viktor Prasanna (chair), Prof. Weihang Wang
Date: Friday, Nov 14, 2025
Time: 12pm
Location: RTH 115
Zoom Link: https://usc.zoom.us/j/7540283446
Meeting ID: 754 028 3446
Abstract: Homomorphic Encryption (HE) enables computation directly on encrypted data, providing strong privacy guarantees for applications in healthcare, finance, and personalized services. However, the practical deployment of HE-based Machine Learning (HE ML) remains limited by high computational and memory costs. Key challenges include: (1) transforming simple operations into complex polynomial arithmetic with large moduli; (2) handling the substantial increase in memory footprint and bandwidth due to encryption; (3) supporting diverse HE parameters and application-specific latency requirements; and (4) overcoming the inefficiency of general-purpose processors, which lack hardware support for modular arithmetic and HE-specific dataflows.
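To make "computation directly on encrypted data" concrete, here is a minimal sketch of the classic additively homomorphic Paillier scheme. This is an illustration only: it uses toy-sized primes that are insecure in practice, and it is not one of the lattice-based schemes (e.g., CKKS or BFV) typically used for HE ML, though it shows the same core idea that arithmetic on ciphertexts maps to arithmetic on plaintexts.

```python
import math
import random

def keygen(p=17, q=19):
    # Toy primes for illustration; real deployments use primes of 1024+ bits.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid because we fix the generator g = n + 1
    return (n, n * n), (lam, mu)  # public key (n, n^2), private key (lambda, mu)

def encrypt(pk, m):
    n, n2 = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:    # r must be invertible modulo n
        r = random.randrange(1, n)
    # Enc(m) = (n + 1)^m * r^n mod n^2
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(pk, sk, c):
    n, n2 = pk
    lam, mu = sk
    # Dec(c) = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) / n
    return (pow(c, lam, n2) - 1) // n * mu % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 7), encrypt(pk, 35)
# Multiplying two ciphertexts adds the underlying plaintexts:
assert decrypt(pk, sk, c1 * c2 % pk[1]) == 7 + 35
```

Even this toy version hints at the cost problem the abstract describes: each encrypted addition requires a full-width modular multiplication, and the ciphertext (an integer modulo n²) is far larger than the plaintext, which is precisely the kind of arithmetic and bandwidth burden that motivates custom FPGA datapaths.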
This dissertation develops FPGA-based solutions to address these challenges and enable efficient HE ML acceleration. We organize HE ML acceleration across multiple abstraction levels—HE primitives, HE subroutines, HE operations, HE ML operators, and end-to-end HE ML applications—and propose latency optimization techniques at each level. First, we introduce a framework that generates FPGA accelerators for HE operations through reusable primitives, subroutine fusion, and design space exploration. Second, we design an FPGA accelerator for homomorphically encrypted matrix–vector multiplication with bandwidth-efficient dataflows and multi-level parallelism. Third, we accelerate HE-based sparse convolutional neural networks using a bipartite-matching-based scheduling algorithm to improve data reuse and reduce pipeline stalls. Finally, we present an FPGA overlay accelerator integrating a domain-specific instruction set and compiler for low-latency HE ML training and inference. Together, these contributions achieve substantial latency reductions and improved scalability over state-of-the-art CPU, GPU, and FPGA implementations.
Bio: Yang Yang is a Ph.D. candidate in the Department of ECE and a silicon engineer at Meta. He is advised by Prof. Viktor Prasanna. His research focuses on parallel computing and FPGA acceleration of homomorphically encrypted machine learning applications.
Published on November 11th, 2025. Last updated on November 11th, 2025.
