Research

My research focuses on challenges in three areas: Federated Learning, user-aligned Foundation Models, and the development of scalable, efficient systems for large models (e.g., LLMs). In each, I aim to contribute solutions that advance the state of the art.

User-aligned Foundation Models

In my undergraduate thesis, I focused on aligning Foundation Models (FMs) with human preferences in text-to-image generation. This work sparked my broader interest in human alignment for AI applications. As I pursue a Ph.D., I plan to extend this line of work to aligning Language Models (LMs), specifically to improve the factual accuracy of responses and the quality of text summarization. My research involves integrating pre-trained FMs with domain expertise, for example through external knowledge bases or hybrid models, to produce better summaries. I am also keen on refining user-alignment techniques for FMs, exploring alternatives to resource-intensive policy training, and contributing to the ongoing improvement of training processes; one such alternative is sketched below.
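To make the "alternatives to resource-intensive policy training" concrete, direct preference optimization (DPO) is one published technique that replaces the reward model and PPO loop of classic RLHF with a single classification-style loss on preference pairs. The sketch below is a minimal, illustrative implementation of that loss, not my own method; the function name and tensor arguments are hypothetical stand-ins for real model outputs.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_chosen: torch.Tensor,
             policy_logp_rejected: torch.Tensor,
             ref_logp_chosen: torch.Tensor,
             ref_logp_rejected: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss over a batch of preference pairs.

    Each argument is a 1-D tensor of summed token log-probabilities that the
    trainable policy (or the frozen reference model) assigns to the human-chosen
    / human-rejected response for each prompt in the batch. (Hypothetical names.)
    """
    chosen_logratio = policy_logp_chosen - ref_logp_chosen
    rejected_logratio = policy_logp_rejected - ref_logp_rejected
    # Push the policy to favor the chosen response more strongly
    # than the frozen reference model does.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()

# Toy usage: random log-probabilities stand in for real model outputs.
batch = 4
loss = dpo_loss(torch.randn(batch), torch.randn(batch),
                torch.randn(batch), torch.randn(batch))
```

Because the loss needs only log-probabilities from the policy and a frozen reference model, it avoids training a separate reward model or running reinforcement learning, which is what makes it attractive for resource-constrained alignment.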

Federated Learning

Integrating Foundation Models (FMs) into Federated Learning (FL) is a promising direction, but it raises challenges, particularly on resource-constrained edge devices. I am interested in making FMs practical for large-scale datasets on such devices, exploring model and data parallelism to distribute the computational load and speed up training. I also aim to develop novel federated learning algorithms, particularly in areas like federated reinforcement learning and federated meta-learning, to handle complex datasets; the core aggregation step these algorithms build on is sketched below.
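As background for the algorithmic work described above, here is a minimal sketch of federated averaging (FedAvg), the canonical FL aggregation step that most federated algorithms extend. It assumes each client sends back a full parameter state dict; the function and variable names are illustrative, and real deployments add compression, privacy, and partial-participation logic on top.

```python
from typing import Dict, List
import torch

def fedavg(client_states: List[Dict[str, torch.Tensor]],
           client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    """One FedAvg aggregation round: average client model parameters,
    weighting each client by its number of local training examples."""
    total = sum(client_sizes)
    return {
        key: sum((n / total) * state[key]
                 for state, n in zip(client_states, client_sizes))
        for key in client_states[0]
    }

# Toy usage: two clients with a single-parameter "model".
clients = [{"w": torch.ones(2)}, {"w": torch.zeros(2)}]
global_state = fedavg(clients, client_sizes=[30, 10])  # w == [0.75, 0.75]
```

For FMs on edge devices, the interesting research questions start where this sketch ends: sharding the model itself across devices (model parallelism) and aggregating only the shards or adapters each client can afford to train.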