Jiawei Zhang

Assistant professor in CS @ UW-Madison

University of Wisconsin–Madison

Biography

Jiawei Zhang is an assistant professor in the Department of Computer Sciences at the University of Wisconsin–Madison. Previously, he was a postdoctoral fellow supported by The MIT Postdoctoral Fellowship For Engineering Excellence in the Laboratory for Information & Decision Systems (LIDS) at MIT, working with Prof. Asuman Ozdaglar and Prof. Saurabh Amin. He obtained his Ph.D. in Computer and Information Engineering from the Chinese University of Hong Kong, Shenzhen, advised by Prof. Zhi-Quan (Tom) Luo, and was honored with the Presidential Award for Outstanding Doctoral Students. He received his B.Sc. in Mathematics (Hua Loo-Keng Talent Program) from the University of Science and Technology of China.

Interests
  • Nonlinear and convex optimization: theory and algorithms
  • Optimization, generalization, and robustness of machine learning, reinforcement learning, and generative models (including diffusion models, large models, and foundation models)
  • Data-driven decision-making under uncertainty
  • New computational models for AI-driven platforms, sustainable energy systems, and signal processing
Education
  • Ph.D. in Computer and Information Engineering

    The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen)

  • B.Sc. in Mathematics (Hua Loo-Keng Talent Program)

    University of Science and Technology of China (USTC)

Research Topics

  • Efficient optimization algorithms for large-scale deterministic and stochastic problems: constrained optimization, minimax problems, distributed optimization, bilevel optimization, and neural network training, including deep learning and large-model training.

  • Optimization and generalization for machine learning and AI, e.g., adversarial training, diffusion models, transformers.

  • Learning methods for optimization.

  • Data-driven optimization and decision-making under uncertainty, e.g., contextual optimization, robust optimization, offline reinforcement learning, and reward learning for sequential decision-making, with applications to AI, energy, cyber-physical security, and signal processing.

Prospective Students

I am looking for students, starting in 2025, who are interested in the research areas mentioned above or in related topics in optimization, machine learning, and their applications to science and engineering. Applicants should have either a strong mathematical background with good coding skills, or strong coding ability with solid theoretical insight. Students with backgrounds in mathematics, statistics, computer science, electronic engineering, and other related fields are welcome to apply.

Recent Publications

(2025). Stochastic Smoothed Primal-Dual Algorithms for Nonconvex Optimization with Linear Inequality Constraints. International Conference on Machine Learning (ICML 2025).

(2025). Addressing misspecification in contextual optimization. International Conference on Machine Learning (ICML 2025).

(2024). Uniformly Stable Algorithms for Adversarial Training and Beyond. International Conference on Machine Learning (ICML 2024).

(2024). A Unified Linear Programming Framework for Reward Learning with Offline Human Behavior and Feedback Data. International Conference on Machine Learning (ICML 2024).

(2024). Stochastic Smoothed Gradient Descent Ascent for Federated Minimax Optimization. International Conference on Artificial Intelligence and Statistics (AISTATS 2024).

Contact