About Me
Hi! I’m Bohang Zhang (张博航), a fourth-year Ph.D. student at Peking University, advised by Prof. Liwei Wang. I also work closely with Prof. Di He. Before starting my Ph.D., I completed my undergraduate studies at the School of the Gifted Young (少年班) at Xi’an Jiaotong University, majoring in Computer Science.
My research centers on fundamental problems in machine learning, such as the expressive power, robustness, and optimization of neural networks, with a special interest in studying these problems from a computer science perspective. Currently, I focus on the following aspects:
- Understanding and analyzing the expressive power of graph neural networks (GNNs), or more broadly, equivariant neural networks for geometric deep learning.
- Designing neural networks with certified robustness guarantees, i.e., achieving provable robustness under adversarial attacks.
- Previously, I was also interested in designing and analyzing optimization algorithms for efficient neural network training.
If you are interested in collaborating with me or would just like to chat, feel free to contact me via e-mail or WeChat.
📝 Publications
* means equal contribution. See the Publications page for more details.
- Rethinking the Expressive Power of GNNs via Graph Biconnectivity.
  Bohang Zhang*, Shengjie Luo*, Liwei Wang, Di He. In ICLR 2023 (Oral, 1.8% acceptance rate!). Code will be released.
- Rethinking Lipschitz Neural Networks and Certified Robustness: A Boolean Function Perspective.
  Bohang Zhang, Du Jiang, Di He, Liwei Wang. In NeurIPS 2022 (Oral, 1.7% acceptance rate!). [Code]
- Boosting the Certified Robustness of L-infinity Distance Nets.
  Bohang Zhang, Du Jiang, Di He, Liwei Wang. In ICLR 2022. [Code]
- Towards Certifying L-infinity Robustness using Neural Networks with L-inf-dist Neurons.
  Bohang Zhang, Tianle Cai, Zhou Lu, Di He, Liwei Wang. In ICML 2021. [Code]
- Non-convex Distributionally Robust Optimization: Non-asymptotic Analysis.
  Jikai Jin*, Bohang Zhang*, Haiyang Wang, Liwei Wang. In NeurIPS 2021.
- Improved Analysis of Clipping Algorithms for Non-convex Optimization.
  Bohang Zhang*, Jikai Jin*, Cong Fang, Liwei Wang. In NeurIPS 2020. [Code]
🎖 Selected Awards
- ACM ICPC World Finalist (ranking 41/135), Porto, Portugal, 2019. [Certificate]
- ACM ICPC East Asia Continent Final Gold Award (ranking 8/382), Xi’an, China, 2018. [Certificate]
- ACM ICPC 2nd Runner-up (Gold Award, ranking 4/298), Jiaozuo, China, 2018. [Certificate]
- Top-10 outstanding student pioneers (ranking 2/10), 2019. Awarded annually to a total of 10 undergraduate students across Xi’an Jiaotong University.
💬 Invited Talks
- Understanding and Improving Expressive Power of GNNs: Distance, Biconnectivity, and WL Tests.
- 2023.3.16. Hosted by Prof. Haggai Maron at Technion. [Slides]
- Rethinking Lipschitz Neural Networks and Certified Robustness: A Boolean Function Perspective.
- 2022.12.15. Hosted by Qiongxiu Li at Tsinghua University. [Poster] [Slides]
- 2022.11.10. Hosted by Prof. Yong Liu at Renmin University of China. [Slides]
- 2022.12.21. Hosted by CVMart (极市平台). [News] [Poster] [Video] [Slides]
- 2022.11.26. 2022 NeurIPS Meetup China by Synced (机器之心). [News] [Poster] [Slides]
- Non-convex Distributionally Robust Optimization: Non-asymptotic Analysis.
- 2022.3.10. Huawei Noah’s Ark Lab. [Slides]
- Analyzing and Understanding Gradient Clipping in Non-Convex Optimization.
🏫 Professional Services
- Reviewer for ICML 2022, NeurIPS 2022 (top reviewer), ICLR 2023, CVPR 2023, ICML 2023.
- Reviewer for Transactions on Pattern Analysis and Machine Intelligence (TPAMI).
