Jiawei Zhang

Postdoctoral scholar in LIDS @ MIT

Massachusetts Institute of Technology

Biography

Jiawei Zhang is an incoming assistant professor in the Department of Computer Sciences at the University of Wisconsin–Madison. Currently, he is a postdoctoral scholar in the Laboratory for Information & Decision Systems (LIDS) at MIT, working with Prof. Asuman Ozdaglar and Prof. Saurabh Amin. He obtained his Ph.D. in Computer and Information Engineering from the Chinese University of Hong Kong, Shenzhen, under the supervision of Prof. Zhi-Quan (Tom) Luo. Previously, he obtained his B.Sc. in Mathematics (Hua Loo-Keng Talent Program) from the University of Science and Technology of China.

Interests
  • Nonlinear and convex optimization: theory and algorithms
  • Learning algorithms: robustness and generalization
  • Data-driven decision-making under uncertainty
  • New computational models for AI-driven platforms, sustainable energy systems, and signal processing
Education
  • Ph.D. in Computer and Information Engineering

    The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen)

  • B.Sc. in Mathematics (Hua Loo-Keng Talent Program)

    University of Science and Technology of China (USTC)

Research Topics

  • Efficient optimization algorithms for large-scale constrained optimization (deterministic and stochastic), minimax problems, distributed optimization, bilevel optimization, and neural network training, including deep learning and large-model training.

  • Optimization and generalization for machine learning and AI, e.g., adversarial training, diffusion models, transformers.

  • Learning methods for optimization.

  • Data-driven optimization and decision-making under uncertainty, e.g., contextual optimization, robust optimization, offline reinforcement learning, and reward learning for sequential decision-making, with applications to AI, energy, cyber-physical security and signal processing.

Prospective Students

I am looking for students, starting in 2025, who are interested in the research areas mentioned above or in related topics in optimization, machine learning, and their applications to science and engineering. They should have either a strong background in mathematics and good coding skills, or strong coding ability and solid theoretical insight. Students with backgrounds in mathematics, statistics, computer science, electronic engineering, and other related fields are welcome to apply.

Recent Publications

(2024). Uniformly Stable Algorithms for Adversarial Training and Beyond. Proceedings of the 41st International Conference on Machine Learning (ICML 2024).

(2024). A Unified Linear Programming Framework for Reward Learning with Offline Human Behavior and Feedback Data. Proceedings of the 41st International Conference on Machine Learning (ICML 2024).

(2024). Stochastic Smoothed Gradient Descent Ascent for Federated Minimax Optimization. Proceedings of the 27th International Conference on Artificial Intelligence and Statistics (AISTATS 2024).

(2023). The Power of Duality Principle in Offline Average-Reward Reinforcement Learning. Proceedings of the 40th International Conference on Machine Learning Workshop on Duality for Modern Machine Learning (ICML 2023 Workshop).

(2023). Linearly Constrained Bilevel Optimization: A Smoothed Implicit Gradient Approach. Proceedings of the 40th International Conference on Machine Learning (ICML 2023).

Contact